CN109034055B - Portrait drawing method and device and electronic equipment - Google Patents


Info

Publication number
CN109034055B
Authority
CN
China
Prior art keywords
image
screening
images
feature
description data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810820310.6A
Other languages
Chinese (zh)
Other versions
CN109034055A (en)
Inventor
顾泽琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201810820310.6A priority Critical patent/CN109034055B/en
Publication of CN109034055A publication Critical patent/CN109034055A/en
Application granted granted Critical
Publication of CN109034055B publication Critical patent/CN109034055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 — Detection; Localisation; Normalisation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; Face representation

Abstract

An embodiment of the invention provides a portrait drawing method and apparatus, and an electronic device. The portrait drawing method includes: acquiring feature description data of a target object to be depicted, wherein the feature description data comprises descriptions of facial features; obtaining a screening atlas from an image database according to the feature description data; sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set; and obtaining a drawn image of the target object according to the target image set.

Description

Portrait drawing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing, and in particular to a portrait drawing method and apparatus and an electronic device.
Background
When a surveillance image is unclear, obtaining a portrait of a specific person requires constructing an estimated image from descriptions given by people associated with that person. However, those people may be affected by their emotional state at the time and may not clearly remember the person's appearance, so the estimated image may differ greatly from the person's actual appearance.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a portrait rendering method and apparatus, and an electronic device.
In a first aspect, an embodiment of the present invention provides a portrait rendering method, including:
acquiring feature description data of a target object to be depicted, wherein the feature description data comprises descriptions of facial features;
obtaining a screening atlas in an image database according to the feature description data;
sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set; and
obtaining a drawn image of the target object according to the target image set.
Optionally, the step of sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set includes:
calculating distances between the features of the images in the screening atlas obtained according to the feature description data and the features of the image corresponding to the feature description data;
and sorting the images in the screening atlas by distance, and acquiring a specified number of top-ranked images to form the target image set.
Optionally, the step of calculating distances between the features of the images in the screening atlas obtained according to the feature description data and the features of the image corresponding to the feature description data includes:
receiving a calibration operation performed in the screening atlas, wherein the calibration operation calibrates one or more calibration images in the screening atlas that are most similar to the target object, and the image corresponding to the feature description data comprises the one or more calibration images;
and calculating distances between the features of the images in the screening atlas other than the calibration images and the features of the calibration images, wherein the distances of the calibration images are set to zero.
Optionally, the step of calculating distances between the features of the images in the screening atlas obtained according to the feature description data and the features of the image corresponding to the feature description data includes:
generating a simulated image according to the feature description data;
and calculating distances between the features of the images in the screening atlas and the features of the simulated image.
Optionally, the step of sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set includes:
acquiring a target position where the target object appeared;
and sorting the images in the screening atlas according to the collection positions of the images and/or the distances between the features of the images and the features of the image corresponding to the feature description data, and acquiring a specified number of top-ranked images to form the target image set, wherein the closer an image's collection position is to the target position, the higher that image is ranked.
Optionally, the step of sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set includes:
acquiring behavior data of the objects corresponding to the images in the screening atlas, wherein the behavior data comprises a degree of correlation with an attribute carried by the target object;
and sorting the images in the screening atlas according to the behavior data of the corresponding objects, and/or one or both of the collection-position distances and the distances between the features of the images and the features of the image corresponding to the feature description data, and acquiring a specified number of top-ranked images to form the target image set, wherein the higher the correlation carried by the behavior data, the higher the corresponding image is ranked.
Optionally, the image database includes multiple types of image sets, each type embodying a particular facial feature of the human face, each type of image set comprising multiple groups of sub-image sets, and each sub-image set describing one type of that facial feature; the step of obtaining the screening atlas in the image database according to the feature description data includes:
acquiring, according to the feature description data, the sub-image sets that conform to the feature description data from at least one type of image set;
and acquiring the image intersection of the one or more sub-image sets to form the screening atlas.
Optionally, the image database includes multiple types of image sets, each type embodying a particular facial feature of the human face, each type of image set comprising multiple groups of sub-image sets, and each sub-image set describing one type of that facial feature; the step of obtaining the screening atlas in the image database according to the feature description data includes:
acquiring, according to the feature description data, the sub-image sets that conform to the feature description data from at least one type of image set;
acquiring the image intersection of the one or more sub-image sets to form a preliminary screening atlas;
and receiving a selection operation performed in the preliminary screening atlas, and acquiring the corresponding images according to the selection operation to form the screening atlas.
Optionally, the step of obtaining a drawn image of the target object according to the target image set includes:
displaying the target image set;
and receiving a selection operation in the target image set, and taking the image selected by the selection operation as the drawn image of the target object.
Optionally, the step of obtaining a drawn image of the target object according to the target image set includes:
displaying the target image set;
receiving a selection operation in the target image set;
and receiving an adjustment operation for adjusting the image selected by the selection operation, and taking the adjusted image as the drawn image of the target object.
Optionally, the method further comprises:
acquiring historical adjustment data corresponding to the adjustment operations used to adjust the images selected by the selection operations;
analyzing the historical adjustment data to obtain a historical adjustment trend;
and modifying, according to the historical adjustment trend, the screening criteria used in the step of obtaining the screening atlas in the image database according to the feature description data.
Optionally, the step of obtaining feature description data of a target object to be depicted includes:
receiving selection operations selecting features from one or more types of feature groups, and forming the feature description data according to the results of the selection operations, wherein each type of feature group describes one type of feature of an object; or, alternatively,
and acquiring image data acquired by acquisition equipment, and analyzing the image data to obtain feature description data.
Optionally, before the step of obtaining the screening atlas in the image database according to the feature description data, the method further includes:
receiving a setting operation for facial-feature parameters, wherein the setting operation sets different types of definition standards for each facial feature, each definition standard forming a data structure;
and storing each image of an existing image set into the data structure of every definition standard the image meets, wherein, when an image meets several definition standards, it is stored in the data structure corresponding to each of them.
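The bucketing step above — one data structure per definition standard, with an image stored under every standard it meets — can be sketched as follows. This is a minimal illustration, not the patent's implementation; the dictionary shapes, group names, and predicate-based standards are hypothetical choices made for the example.

```python
from collections import defaultdict

def build_feature_index(images, standards):
    """Bucket images by every definition standard they satisfy.

    `images` maps an image id to its feature labels; `standards` maps a
    standard name to a predicate over those labels (both shapes are
    hypothetical, chosen only for illustration).
    """
    index = defaultdict(set)
    for image_id, labels in images.items():
        for name, matches in standards.items():
            if matches(labels):            # an image may meet several standards
                index[name].add(image_id)  # ...and is stored under each of them
    return index

# Two example definition standards for two facial features.
standards = {
    "long_eyebrows": lambda f: f.get("eyebrow") == "long",
    "large_eyes": lambda f: f.get("eye") == "large",
}
images = {
    "img1": {"eyebrow": "long", "eye": "large"},
    "img2": {"eyebrow": "short", "eye": "large"},
}
index = build_feature_index(images, standards)
```

Because `img1` meets both standards, it appears in both buckets, which is exactly the duplication the optional step calls for.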
In a second aspect, an embodiment of the present invention provides a portrait drawing apparatus, including:
the data acquisition module is used for acquiring feature description data of a target object to be depicted, wherein the feature description data comprises description of facial features;
the screening module is used for obtaining a screening atlas in an image database according to the feature description data;
the sorting module is used for sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set;
and the obtaining module is used for obtaining a drawn image of the target object according to the target image set.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, performs the steps of the method described above.
Compared with the prior art, the portrait drawing method in the embodiments of the present invention obtains a screening atlas from the image database using the feature description data and then sorts that atlas, so that the images most relevant to the feature description data are ranked first, which improves the accuracy of the resulting drawn image of the target object.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present invention.
FIG. 2 is a flow chart of a portrait rendering method provided by an embodiment of the present invention.
Fig. 3 is a detailed flowchart of step S202 of the portrait rendering method according to the embodiment of the present invention.
Fig. 4 is a detailed flowchart of step S203 of the portrait rendering method according to the embodiment of the present invention.
FIG. 5 is a partial flow chart of a portrait rendering method provided by an embodiment of the present invention.
FIG. 6 is a partial flow chart of a portrait rendering method provided by an embodiment of the present invention.
FIG. 7 is a functional block diagram of a portrait rendering device according to an embodiment of the present invention.
FIG. 8 is a schematic diagram of a portion of the functional blocks of a portrait rendering device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
At present, each time a victim files a report, the police need the victim to describe the perpetrator's appearance as accurately as possible so that a portrait of the suspect can be drawn for a wanted notice. However, the inventor's research has found that, owing to fear, tension, and the violence of the incident, victims often cannot clearly remember the suspect's appearance at the scene, so the portrait on the wanted notice is often far from the suspect's real appearance, which increases the difficulty of solving the case. The technical problem the embodiments of the present invention address is therefore: improving the accuracy of the suspect portrait the police generate from a victim's simple description when the scene surveillance footage is not ideal. The inventor further found that one main reason for the current low accuracy is that every portrait is drawn from scratch; if a face image roughly matching the victim's description could be provided first, the victim could change only the parts that differ from the suspect, and with fewer modifications both efficiency and accuracy improve. Based on the above, the present application improves the drawing of a portrait of a target object through the following embodiments.
Example one
First, an example electronic device 100 for implementing the portrait drawing method of an embodiment of the present invention is described with reference to fig. 1. The example electronic device 100 may be a computer, a mobile terminal such as a smart phone or a tablet computer, or an identity-verification device such as an integrated ID-and-face verification terminal.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and so on. One or more computer program instructions may be stored on the computer-readable storage media and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality of the embodiments of the invention described below. Various applications and data, such as the data used and/or generated by those applications, may also be stored in the computer-readable storage media.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
For example, the devices in an electronic system implementing the portrait drawing method, apparatus, and system according to the embodiments of the present invention may be integrated or distributed; for instance, the processor 102, storage 104, input device 106, and output device 108 may be integrated in one unit while the image capture device 110 is placed separately.
Exemplary electronic devices for implementing the portrait rendering method and apparatus according to embodiments of the present invention may be implemented as Personal Computers (PCs), tablet PCs, smart phones, Personal Digital Assistants (PDAs), and the like.
Example two
The present embodiment provides a portrait rendering method, which may be performed by an electronic device.
In accordance with an embodiment of the present invention, there is provided an embodiment of a portrait rendering method, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Please refer to fig. 2, which is a flowchart illustrating a portrait rendering method according to an embodiment of the present invention. The specific process shown in fig. 2 will be described in detail below.
In step S201, feature description data of a target object to be drawn is acquired.
In this embodiment, the feature description data includes descriptions of facial features. For example, the feature description data may describe the size of the target subject's nose, whether the nose bridge is high or flat, the length of the eyebrows, the shape of the ears, the mouth shape, and so on.
In this embodiment, the feature description data may include description parameters covering all facial features, or description parameters covering only some of them.
In one application scenario, the target object may be a criminal suspect, and a portrait of the suspect is drawn by the method in this embodiment so that the relevant persons can find the suspect as soon as possible.
In another application scenario, the target object may be a good Samaritan, and a portrait of that person is drawn by the method in this embodiment so that the relevant persons can find them as soon as possible. For example, the good Samaritan may be a person who helped catch a thief, a person who helped drive away an attacker, and so on.
In one embodiment, a selection operation selecting features from one or more types of feature groups is received, and the feature description data is formed from the result of the selection operation, where each type of feature group describes one type of feature of an object. For example, one feature group may describe the eyes, the corresponding options comprising: large eyes, medium eyes, small eyes, and so on. In this embodiment, the options for each facial feature may be displayed on the display interface, and each option in a feature group may be presented as text, as a graphic, or as a combination of the two.
In another embodiment, image data collected by a collection device is acquired and parsed to obtain the feature description data. In one application scenario, the collection device is a surveillance camera installed at the crime scene, and the image data is a blurred image of the target object captured by that camera. The facial features in the blurred image are classified by recognizing the image, and the feature description data is obtained from the classification result.
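The selection-based path in step S201 can be sketched as follows — a minimal illustration of assembling feature description data from per-group selections. The group names, option lists, and function name are hypothetical, chosen only for this example.

```python
# Hypothetical feature groups; each group describes one type of facial feature.
FEATURE_GROUPS = {
    "eye": ["large", "medium", "small"],
    "eyebrow": ["long", "medium", "short"],
}

def build_feature_description(selections):
    """Validate per-group selections and assemble feature description data."""
    description = {}
    for group, choice in selections.items():
        if group not in FEATURE_GROUPS or choice not in FEATURE_GROUPS[group]:
            raise ValueError(f"unknown selection {group}={choice}")
        description[group] = choice  # one chosen option per described feature
    return description
```

A partial description — e.g. only the eyes selected — is valid here, matching the embodiment's note that the data may cover only some facial features.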
And step S202, obtaining a screening atlas in an image database according to the feature description data.
In this embodiment, when the feature description data only includes descriptions of some facial features, images may be recommended according to the features of objects in the historical data that carry the same attribute as the target object. For example, suppose the feature description data does not include a mouth shape, the attribute carried by the target object is that of a person who helped a victim drive away an attacker, and the historical data shows that people with thinner lips are the most likely to have done so; images of people with thinner lips may then be added to the screening atlas.
In one embodiment, the feature description data includes a high nose bridge and large eyes. All images in the image database meeting both conditions are screened out to form the screening atlas.
It will be appreciated that the more facial-feature parameters the feature description data contains, the fewer images the screening atlas may contain.
Step S203, sorting the screening atlas by its relevance to the feature description data and screening it to obtain a target image set.
In one embodiment, the degree of association between each image in the screening atlas and the feature description data may be calculated, the screening atlas sorted by that degree, and a specified number of top-ranked images selected to form the target image set.
In another embodiment, the screening atlas is input into a pre-trained neural network recognition model to select the specified number of images most relevant to the feature description data, forming the target image set.
In another embodiment, a simulated image is generated from the feature description data, distances between the features of the images in the screening atlas and the features of the simulated image are calculated, the images are sorted by distance, and a specified number of top-ranked images are acquired to form the target image set, where a smaller distance means a higher ranking. It will be appreciated that a smaller distance between an image's features and the simulated image's features indicates a higher similarity between the two. Further, when the feature description data includes description parameters for only some facial features, candidate images of each undescribed feature can first be obtained to piece together a complete face image. For example, if the user remembers only the eyebrows but has forgotten the eyes, several images can be drawn at random for each common eye shape, or for the eye types most likely to co-occur with the specified eyebrows. These are then displayed, and the user's selection is received.
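The simulated-image embodiment — compute a feature distance for every image in the screening atlas, sort ascending, keep the top k — can be sketched as below. The feature vectors and Euclidean distance are illustrative assumptions; the patent does not specify a feature representation or metric.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors (an assumed metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_k_by_distance(screening_atlas, simulated_feature, k):
    """Rank the screening atlas by distance to the simulated image's
    features (smaller distance ranks higher) and keep the top k images."""
    ranked = sorted(screening_atlas.items(),
                    key=lambda item: euclidean(item[1], simulated_feature))
    return [image_id for image_id, _ in ranked[:k]]

# Toy 2-dimensional features; a real system would use learned embeddings.
screening = {"img1": [1.0, 2.0], "img2": [0.1, 0.1], "img3": [5.0, 5.0]}
target_set = top_k_by_distance(screening, [0.0, 0.0], k=2)  # ["img2", "img1"]
```

The same ranking helper also fits the calibration-image variant of step S2031: replace the simulated image's features with a calibration image's features.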
And step S204, obtaining a drawing image of the target object according to the target image set.
Further, the target image set represents the images most likely to show the target object.
If the user selects an image from the target image set and determines that it shows the target object being searched for, the information related to that image can be retrieved from the database, completing the search.
Further, when no image in the target image set is determined to be the target object, the facial details of an image selected from the set can be corrected with an adjustment tool. For example, the eyebrows may be moved up or down, or the eyes resized, to yield the final drawn image of the target object.
Further, after step S204, the method may also include retrieving the data corresponding to the drawn image for the relevant persons to analyze, so that they can further determine whether it is an image of the target object.
In the portrait drawing method described above, a screening atlas is obtained from the image database using the feature description data and then sorted, so that the images most relevant to the feature description data are ranked first, which improves the accuracy of the resulting drawn image of the target object.
In this embodiment, the image database includes multiple types of image sets, each type embodying a particular facial feature of the human face; each type of image set comprises multiple groups of sub-image sets, each group describing one type of that facial feature.
Specifically, each facial feature can be divided into several types. For example, eyebrows can be divided into long, medium, and short eyebrows.
In this embodiment, the screening atlas may be an image set obtained by screening in the image database; or the image set obtained by further screening by the user after the image database is primarily screened.
In this embodiment, as shown in fig. 3, the step S202 may include the following steps.
Step S2021, acquiring a sub-image set conforming to the feature description data from at least one type of image set according to the feature description data.
For example, the sub-image set used to describe long eyebrows contains the images of all long eyebrows in the image database. As another example, the sub-image set used to describe large eyes contains the images of all large eyes in the image database.
The intersection of every two sub-image sets may or may not be empty.
Step S2022, acquiring the image intersection of one or more sub-image sets to form the screening atlas.
Further, on the basis of step S2022, the method may also include: receiving a selection operation performed in the atlas obtained in step S2022, and acquiring the images corresponding to the selection operation to form the screening atlas. Specifically, the atlas obtained in step S2022 may be displayed in a display interface that can receive the user's click operations.
Alternatively, on the basis of step S2022, the method may instead include: receiving a selection operation performed in the atlas obtained in step S2022, removing the images corresponding to the selection operation, and using the remaining images as the screening atlas. Again, the atlas obtained in step S2022 may be displayed in a display interface that can receive the user's click operations.
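Step S2022's intersection of the matched sub-image sets can be sketched as follows; each sub-image set is modeled as a plain set of image ids, a representation assumed for illustration.

```python
from functools import reduce

def screening_atlas(sub_image_sets):
    """Intersect the sub-image sets matched by the feature description data.

    Each sub-image set holds the ids of images exhibiting one described
    feature type; the screening atlas is the ids present in every set.
    """
    if not sub_image_sets:
        return set()
    return reduce(set.intersection, sub_image_sets)

# Hypothetical sub-image sets for "long eyebrows" and "large eyes".
long_eyebrows = {"img1", "img2", "img4"}
large_eyes = {"img1", "img3", "img4"}
atlas = screening_atlas([long_eyebrows, large_eyes])  # {"img1", "img4"}
```

As the description notes, pairwise intersections may be empty; an empty result here simply means no image in the database satisfies every described feature at once.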
In this embodiment, as shown in fig. 4, the step S203 may include the following steps.
Step S2031, calculating the distance between the features of each image in the screening atlas and the features of the image corresponding to the feature description data.
Step S2032, sorting the images in the screening atlas according to the distance, and acquiring a specified number of top-ranked images to form a target image set.
In this embodiment, the smaller the calculated distance, the higher the image may be ranked. The target image set thus includes the specified number of top-ranked images.
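The ranking of steps S2031 and S2032 can be sketched as follows; the Euclidean distance and the toy feature vectors are illustrative assumptions (the method itself leaves the feature extractor and metric open, e.g. a neural network model).

```python
# Sketch of steps S2031-S2032: rank screening-atlas images by their feature
# distance to the described image and keep the specified number of
# top-ranked images. Feature vectors and the metric are assumptions.

import math

def feature_distance(f1, f2):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def target_image_set(atlas_features, described_features, count):
    """atlas_features: {image_id: feature_vector}; smaller distance
    ranks higher, and only the top `count` images are kept."""
    ranked = sorted(atlas_features,
                    key=lambda img: feature_distance(atlas_features[img],
                                                     described_features))
    return ranked[:count]

atlas = {"A": (0.9, 0.1), "B": (0.2, 0.8), "C": (0.5, 0.5)}
print(target_image_set(atlas, (1.0, 0.0), count=2))  # ['A', 'C']
```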
In one embodiment, step S2031 comprises: receiving a calibration operation performed in the screening atlas, where the calibration operation calibrates one or more calibration images in the screening atlas that are most similar to the target object, and the images corresponding to the feature description data include the one or more calibration images; and calculating the distances between the features of the images in the screening atlas other than the calibration images and the features of the calibration images, where the distance of each calibration image itself is set to zero. In this embodiment, steps S2031 and S2032 may be implemented with a neural network model: the calibration images and the screening atlas are input into the neural network model for calculation, and images whose features are close to those of the calibration images yield relatively small distances.
In this embodiment, when a plurality of calibration images are calibrated, the feature distance between each calibration image and each other image in the screening atlas is calculated, so that every image in the screening atlas other than the calibration images obtains as many feature distances as there are calibration images. The distance between such an image and the calibration images may then be obtained by averaging these feature distances, or by combining them according to certain weights. For example, suppose the calibration images are image A, image B and image C, and the feature distances between image D in the screening atlas and images A, B and C are a, b and c, respectively. The distance between image D and the calibration images can be expressed as (a + b + c)/3; it can also be expressed as ka + pb + qc, where k + p + q = 1.
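The two combination rules above, the plain average (a + b + c)/3 and the weighted sum ka + pb + qc with k + p + q = 1, can be sketched directly; the distance values are illustrative.

```python
# Sketch of the multi-calibration-image distance: image D's distance is
# either the plain average of its feature distances to each calibration
# image, or a weighted sum whose weights add up to 1.

def combined_distance(feature_distances, weights=None):
    if weights is None:                        # plain average (a + b + c)/3
        return sum(feature_distances) / len(feature_distances)
    assert abs(sum(weights) - 1.0) < 1e-9      # k + p + q must equal 1
    return sum(w * d for w, d in zip(weights, feature_distances))

a, b, c = 1.0, 2.0, 3.0                        # distances of D to A, B, C
print(combined_distance([a, b, c]))                           # 2.0
print(combined_distance([a, b, c], weights=[0.5, 0.3, 0.2]))  # about 1.7
```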
In another embodiment, step S2031 comprises: generating a simulated image according to the feature description data, and calculating the distances between the features of the images in the screening atlas and the features of the simulated image. In one embodiment, steps S2031 and S2032 may be implemented with a neural network model: the simulated image and the screening atlas are input into the neural network model for calculation, and the image whose features are closest to those of the simulated image yields the smallest distance.
By calculating the distance between the features of each image in the screening atlas and the features of the image corresponding to the feature description data, the degree to which each image matches the feature description data can be quantified. An image with a higher matching degree is more likely to be an image of the target object, so the accuracy of the portrait depiction of the target object is effectively improved.
In one example, when obtaining the target image set, indexes other than similarity may also be added to the ranking parameters, such as region and criminal history. For example, if a victim was attacked in a certain city, the system should first retrieve the portraits of persons who live in, or have recent activity records in, that city or nearby areas. If one of those persons has a criminal history, his priority should be increased; and if the crimes that person committed are similar or identical to the crime the victim suffered this time, the priority should be increased further. This is described in detail below.
In this embodiment, the step S203 may further include the following steps.
Step S2033, a target position where the target object appears is acquired.
In this embodiment, the target position may be the position of the target object when it performed a certain behavior.
In an application scenario, the target object is the performer of a certain behavior, and the target position may be obtained by receiving position information input by the object subjected to that behavior, or by acquiring voice information generated by the object subjected to that behavior, where the voice information carries the target position.
Step S2034, sorting the images in the screening atlas according to the collection positions of the images, and acquiring a specified number of top-ranked images to form a target image set, where the closer an image's collection position is to the target position, the higher the image is ranked.
Further, the method in this embodiment may include steps S2031 and S2033 together with the following step: sorting the images in the screening atlas according to both the collection positions of the images and the distances between the features of the images and the features of the image corresponding to the feature description data, and acquiring a specified number of top-ranked images to form a target image set, where the closer an image's collection position is to the target position, the higher the priority of that image.
Sorting the screened images in consideration of the position where the target object appeared can improve the reliability of the screening.
In this embodiment, the step S203 may further include the following steps.
Step S2035, acquiring behavior data of the objects corresponding to the images in the screening atlas, where the behavior data includes a degree of correlation with the attribute carried by the target object.
In this embodiment, the behavior data may be crime records or records of good deeds generated by the objects corresponding to the images in the screening atlas during a historical time period. When the behavior data is crime data, the attribute may be a crime type, for example burglary, theft, or intentional injury. When the behavior data is a record of good deeds, the attribute may be a deed type, for example helping to catch a thief or helping to drive away a wrongdoer.
In one example, when the target object is a criminal suspect, the attribute carried by the target object may be a specific suspected criminal type.
Step S2036, sorting the images in the screening atlas according to the behavior data of the objects corresponding to the images, and acquiring a specified number of top-ranked images to form a target image set, where the greater the degree of correlation carried by the behavior data, the higher the corresponding image is ranked.
Further, the method in this embodiment may include steps S2031, S2033 and S2035 together with the following step: sorting the images in the screening atlas according to the behavior data of the objects corresponding to the images, the collection positions of the images, and the distances between the features of the images and the features of the image corresponding to the feature description data, and acquiring a specified number of top-ranked images to form a target image set, where the closer an image's collection position is to the target position, the higher the priority of that image.
Further, the method in this embodiment may include steps S2031 and S2035 together with the following step: sorting the images in the screening atlas according to the behavior data of the objects corresponding to the images and the distances between the features of the images and the features of the image corresponding to the feature description data, and acquiring a specified number of top-ranked images to form a target image set, where the smaller the distance and the greater the degree of correlation, the higher the priority of the corresponding image.
Further, the method in this embodiment may include steps S2033 and S2035 together with the following step: sorting the images in the screening atlas according to the behavior data of the objects corresponding to the images and the collection positions of the images, and acquiring a specified number of top-ranked images to form a target image set, where the closer an image's collection position is to the target position, the higher the priority of that image.
By taking a number of factors into account in the sorting, the images in the resulting target image set can more closely approximate the target object.
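The multi-factor sorting described in the combinations above can be sketched as one weighted score per image; the weights and record fields below are illustrative assumptions, not values prescribed by the method.

```python
# Sketch of the combined ranking of steps S2031/S2033/S2035: one score mixes
# feature distance, distance of the collection position from the target
# position, and (negatively) the behaviour-data correlation, so nearer and
# more strongly correlated images rank first. Weights are assumptions.

def rank_images(records, w_feat=0.5, w_pos=0.3, w_corr=0.2):
    """records: {image_id: (feature_dist, position_dist, correlation)};
    correlation lies in [0, 1], larger meaning a stronger attribute match."""
    def score(img):
        feat, pos, corr = records[img]
        return w_feat * feat + w_pos * pos - w_corr * corr  # lower is better
    return sorted(records, key=score)

records = {
    "A": (0.2, 0.9, 0.1),   # similar face, collected far away, weak record
    "B": (0.4, 0.1, 0.9),   # less similar, collected nearby, strong record
}
print(rank_images(records))  # ['B', 'A']
```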
In this embodiment, the drawing image of the target object may be directly selected from the target image set obtained in step S203; alternatively, at least one image may be selected from the target image set obtained in step S203 and processed to obtain the drawing image of the target object.
In one embodiment, the step S204 may include the following steps.
Step S2041, displaying the target image set.
Step S2042 receives a selection operation in the target image set, and takes an image selected by the selection operation as a drawing image of a target object.
In another embodiment, the step S204 may include the following steps.
And step S2043, displaying the target image set.
Step S2044, receiving a selection operation in the target image set.
Step S2045, receiving an adjustment operation for adjusting the image selected by the selection operation, and using the adjusted image as the drawing image of the target object.
In this embodiment, the image may be imported into third-party retouching software for retouching. The adjustment operation may include adjusting the eye portion of the image, and may also include adjusting the eyebrow portion.
In this embodiment, as shown in fig. 5, the method further includes the following steps.
Step S301, acquiring historical adjustment data corresponding to an adjustment operation for adjusting the image selected by the selection operation.
The historical adjustment data includes the user's modification operations on images selected from the target image set, such as adjustments to eyebrow length, eyebrow height, eye size, mouth size, and mouth shape.
Step S302, analyzing the historical adjustment data to obtain a historical adjustment trend.
By analyzing the historical data, the features the user adjusts most frequently can be identified, yielding the adjustment trend. The adjustment trend may be the feature most frequently adjusted by the user; for example, eyes may most often be made smaller, or eyebrows most often thickened.
Step S303, correcting, according to the historical adjustment trend, the screening criterion used when obtaining the screening atlas in the image database according to the feature description data.
The drawing image of the target object determined by the user, together with the user's modification process, can be analyzed for the variables of interest when screening or modifying images: for example, whether a certain appearance feature frequently occurs among suspects, which parts the user most often modifies, and which facial features are most often unclear. The screening criterion of the next round of the image search algorithm can then be adjusted accordingly. For example, if the criminal suspects' eyebrows have been long over several consecutive determinations, then the next time the user omits a description of the suspect's eyebrows, the system recommends more portraits with long eyebrows for inclusion in the screening atlas. Similarly, if the feature description data does not include an eyebrow-position feature but users repeatedly adjust the eyebrow position of images during the modification process, that feature can be added to the data classification criteria.
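Steps S301 to S303 can be sketched as a simple frequency count over the recorded adjustment operations; the operation names below are illustrative assumptions.

```python
# Sketch of steps S301-S303: tally the user's historical adjustment
# operations and take the most frequent one as the adjustment trend that
# corrects the next round's screening criterion. Names are assumptions.

from collections import Counter

history = ["shrink_eyes", "lengthen_eyebrows", "shrink_eyes",
           "widen_mouth", "shrink_eyes", "lengthen_eyebrows"]

# most_common(1) returns the single most frequent (operation, count) pair
trend = Counter(history).most_common(1)[0][0]
print(trend)  # shrink_eyes
```

In practice the trend would feed back into the screening criterion, e.g. by recommending images whose eyes are already smaller.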
In this embodiment, steps S301 to S303 may be executed in the background, and steps S201 to S204 may be executed only when the program is called. That is, steps S301 to S303 may be performed by a process different from that of steps S201 to S204.
In this embodiment, as shown in fig. 6, the method further includes the following steps.
Step S401, receiving a setting operation for the feature parameters of the facial features.
In this embodiment, the setting operation is used to set the definition criteria of the different types corresponding to each class of facial feature, and each definition criterion forms a data structure.
For example, eyebrows may correspond to three types: long, medium and short. An eyebrow whose length equals the width of the eye may be defined as "short"; an eyebrow 1.4 to 1.6 times the length of the eye as "medium"; and eyebrows so long that the two nearly join together as "long".
As another example, eyes may be classified as large, medium and small. The eye may be "small" when its aspect ratio is greater than 5, "medium" when the aspect ratio is between 1.5 and 2, and "large" when the aspect ratio is close to 1.
Of course, the above description is merely illustrative, and those skilled in the art can divide each class of facial feature into various types according to actual needs or other criteria.
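A definition criterion of the kind just described can be sketched as a thresholded ratio; the exact cut-offs follow this example's text and are assumptions, not fixed by the method.

```python
# Sketch of a step S401 definition criterion: map a measured length ratio
# to the eyebrow types (short/medium/long) described above. Thresholds
# are illustrative assumptions.

def classify_eyebrow(eyebrow_len, eye_len):
    ratio = eyebrow_len / eye_len
    if ratio <= 1.0:
        return "short"          # about as long as the eye, or shorter
    if ratio <= 1.6:
        return "medium"         # roughly 1.4-1.6 times the eye length
    return "long"               # much longer, the two eyebrows nearly join

print(classify_eyebrow(3.0, 3.0))  # short
print(classify_eyebrow(4.5, 3.0))  # medium  (ratio 1.5)
print(classify_eyebrow(6.0, 3.0))  # long
```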
Step S402, storing each image in the existing image set into the data structure corresponding to each definition criterion that the image satisfies.
In this embodiment, when any image satisfies a plurality of definition criteria, the image is stored in the data structures corresponding to each of those criteria.
In one embodiment, all face images matching one type of a facial feature may be stored in one set, using a data structure such as a hash table (Hash Map) that takes the feature type as the key and the set of face images of that type as the value.
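The hash-table storage just described can be sketched as follows; the key format and image identifiers are illustrative assumptions.

```python
# Sketch of the step S402 storage scheme: a hash table keyed by
# "<feature>:<type>" whose value is the set of face-image ids of that type.
# An image satisfying several criteria appears under each of them.

from collections import defaultdict

index = defaultdict(set)

def store(image_id, feature_types):
    """feature_types, e.g. {"eyebrow": "short", "eye": "large"}."""
    for feature, kind in feature_types.items():
        index[f"{feature}:{kind}"].add(image_id)

store(101, {"eyebrow": "short", "eye": "large"})
store(102, {"eyebrow": "short", "eye": "small"})
print(sorted(index["eyebrow:short"]))  # [101, 102]
```

Deleting or adding a person's image, as described below for suspects entering or leaving prison, amounts to removing or adding the id in every set that contains it.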
Further, if the portrait drawing method is used to depict criminal suspects, then when a suspect enters prison, the image database can correspondingly delete that person's image; when a suspect leaves prison, the image database can correspondingly add that person's image.
A facial-feature statistics module is responsible for counting the proportions and distributions of the size, shape and the like of each facial feature, for example classifying eyebrows by artificially specified standards such as short, medium, long and dense.
A facial-feature-based face image classification module takes the shape of each facial feature as the criterion for classifying images, and the face images containing a given facial-feature type are numbered and stored in the same set; for example, the face images with short eyebrows form one sub-image set, and the face images with large eyes form another sub-image set.
Further, the image data sets in the image database may also be exported in a defined format for further analysis.
With the method in this embodiment, the statistical data can also provide strong data support for criminological research.
In this embodiment, before step S201 is executed, coarse features of the images in the image database (for example, eyebrow length or face shape) may be computed in advance, and the results may be stored as tags associated with each image. When the method is used again and a user selects a new image, so that distances to the new image need to be calculated, part of the images can first be filtered out using each image's tags, and the distances are then calculated only between the remaining images and the new image selected by the user. For example, if the user selects a short-eyebrow, square face, the images of long-eyebrow round faces can be deleted directly, thereby reducing the amount of computation.
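The tag-based pre-filtering just described can be sketched as follows; the tags and image entries are illustrative assumptions.

```python
# Sketch of the pre-filtering step: coarse tags computed in advance
# (e.g. eyebrow length, face shape) eliminate obviously incompatible
# images, so the expensive feature-distance computation runs only on
# the remainder. Tags and images are illustrative assumptions.

database = {
    "p1": {"eyebrow": "short", "face": "square"},
    "p2": {"eyebrow": "long",  "face": "round"},
    "p3": {"eyebrow": "short", "face": "round"},
}

def prefilter(query_tags):
    """Keep only images whose precomputed tags all match the query's tags."""
    return [img for img, tags in database.items()
            if all(tags[k] == v for k, v in query_tags.items())]

candidates = prefilter({"eyebrow": "short", "face": "square"})
print(candidates)  # ['p1']
# Feature distances would now be computed only for these candidates.
```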
EXAMPLE III
Corresponding to the portrait drawing method provided in the second embodiment, this embodiment provides a portrait drawing apparatus. The modules in the portrait drawing apparatus in this embodiment are used to perform the steps of the method in the second embodiment. Fig. 7 shows a schematic structural diagram of a portrait drawing apparatus according to an embodiment of the present invention; as shown in fig. 7, the apparatus includes the following modules.
A data obtaining module 501, configured to obtain feature description data of a target object to be depicted, where the feature description data includes descriptions of the facial features of a human face.
A screening module 502, configured to obtain a screening atlas in an image database according to the feature description data.
A sorting module 503, configured to sort and screen the screening atlas by the degree of its relevance to the feature description data to obtain a target image set.
An obtaining module 504, configured to obtain a drawing image of the target object according to the target image set.
According to the portrait drawing apparatus provided by the embodiment of the present invention, a screening atlas is obtained from the image database in combination with the feature description data, and the screening atlas is then further sorted, so that the images in the screening atlas that are highly relevant to the described feature description data are ranked first, thereby improving the accuracy of the obtained drawing image of the target object.
The sorting module 503 is further configured to calculate the distance between the features of each image in the screening atlas and the features of the image corresponding to the feature description data, sort the images in the screening atlas according to the distance, and acquire a specified number of top-ranked images to form the target image set.
In this embodiment, the image database includes multiple classes of image sets: each class of image set embodies a particular facial feature of the human face, each class of image set comprises a plurality of groups of sub-image sets, and each sub-image set describes one type of that facial feature. The screening module 502 is further configured to acquire, according to the feature description data, the sub-image sets that conform to the feature description data from at least one class of image set, and acquire the image intersection of one or more sub-image sets to form the screening atlas.
Alternatively, the screening module 502 is further configured to acquire, according to the feature description data, the sub-image sets that conform to the feature description data from at least one class of image set, acquire the image intersection of one or more sub-image sets to form a preliminary screening atlas, receive a selection operation performed in the preliminary screening atlas, and acquire the corresponding images according to the selection operation to form the screening atlas.
In this embodiment, the sorting module 503 is further configured to obtain a target position where the target object appears, sort the images in the screening atlas according to a collection position of the image in the screening atlas, and obtain a specified number of images sorted at the top to form a target image set, where the closer the collection position is to the target position, the closer the corresponding image sorting position is to the top.
In this embodiment, the sorting module 503 is further configured to obtain behavior data of an object corresponding to an image in the screening graph set, where the behavior data includes a correlation degree of an attribute carried by the target object, sort the images in the screening graph set according to the behavior data of the object corresponding to the image in the screening graph set, and obtain a specified number of images sorted before to form the target image set, where a larger correlation degree carried by the behavior data leads to a higher sorting position of the corresponding image.
The sorting module 503 is further configured to receive a calibration operation performed in the screening atlas, where the calibration operation is used to calibrate one or more calibration images in the screening atlas that are most similar to the target object, and the image corresponding to the feature description data includes the one or more calibration images; and calculating the distances between the features of the images in the screening image set except the calibration image and the features of the calibration image, wherein the distances of the calibration image are set to be zero.
The sorting module 503 is further configured to generate a simulated image according to the feature description data, and calculate a distance between a feature of an image in the screening graph set and a feature of the simulated image.
In this embodiment, the obtaining module 504 is further configured to display the target image set, receive a selection operation in the target image set, and use an image selected by the selection operation as a drawing image of a target object.
In this embodiment, the obtaining module 504 is further configured to display the target image set, receive a selection operation in the target image set, receive an adjustment operation for adjusting an image selected by the selection operation, and use the adjusted image as a drawing image of a target object.
In this embodiment, as shown in fig. 8, the portrait drawing apparatus further includes:
an operation obtaining module 601, configured to obtain historical adjustment data corresponding to an adjustment operation for adjusting the image selected by the selection operation;
an analysis module 602, configured to analyze the historical adjustment data to obtain a historical adjustment trend;
and a correcting module 603, configured to correct, according to the historical adjustment trend, the screening criterion used when obtaining the screening atlas in the image database according to the feature description data.
In this embodiment, the data obtaining module 501 is further configured to receive a selection operation of selecting features respectively from one or more classes of feature groups and form the feature description data according to the result corresponding to the selection operation, where each class of feature group describes one class of feature of the object; or configured to acquire image data collected by a collection apparatus and analyze the image data to obtain the feature description data.
In this embodiment, as shown in fig. 8, the portrait drawing apparatus further includes:
a setting module 604, configured to receive a setting operation on the feature parameters of the facial features, where the setting operation is used to set the definition criteria of the different types corresponding to each class of facial feature, and each definition criterion forms a data structure;
and a storing module 605, configured to store each image in the existing image set into the data structure corresponding to each definition criterion that the image satisfies.
When any image satisfies a plurality of definition criteria, the image is stored in the data structures corresponding to each of those criteria.
For other details of the apparatus in this embodiment, further reference may be made to the description in the above method embodiment, which is not described herein again.
Furthermore, an embodiment of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor executes the computer program to implement the steps of the method provided by the foregoing method embodiment.
Further, an embodiment of the present invention further provides a computer program product of a portrait rendering method and apparatus, including a computer-readable storage medium storing program codes, where instructions included in the program codes may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, which is not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A portrait rendering method, comprising:
acquiring feature description data of a target object to be depicted, wherein the feature description data comprises descriptions of facial features: a selection operation of selecting features respectively from one or more feature groups is received, and the feature description data is formed according to the result corresponding to the selection operation; each facial feature is divided into a plurality of types, and the description of the facial features characterizes the type of one or more facial features;
obtaining a screening atlas in an image database according to the feature description data, wherein the image database includes multiple classes of image sets, each class of image set embodies a particular facial feature of the human face, each class of image set comprises a plurality of groups of sub-image sets, and each sub-image set describes one type of that facial feature;
sorting and screening the screening atlas by the degree of its relevance to the feature description data to obtain a target image set; and obtaining a drawing image of the target object according to the target image set;
wherein the step of obtaining a screening atlas in an image database according to the feature description data comprises:
acquiring a sub-image set which accords with the feature description data from at least one type of image set according to the feature description data;
acquiring the image intersection of one or more sub-image sets to form the screening atlas;
when the feature description data comprises descriptions of only part of the facial features, adding, into the screening atlas, the recommended feature images of objects in historical data that carry the same attribute as the target object;
wherein the step of rank-screening the relevance size of the screening atlas and the feature description data to obtain a target image set comprises:
calculating the distance between the feature of the image in the screening graph set obtained according to the feature description data and the feature of the image corresponding to the feature description data;
and sorting the images in the screening image set according to the distance to obtain a designated number of images sorted in the front to form a target image set.
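For illustration only (not part of the claims), the screening-and-ranking pipeline described above could be sketched as follows. All function names, the feature representation (small numeric vectors), and the Euclidean metric are assumptions; the patent does not fix a particular distance measure or data layout.

```python
# Illustrative sketch: intersect the sub-image sets matched by the
# feature description data, then rank the resulting screening image
# set by feature distance and keep the top-k images.

def build_screening_set(sub_image_sets):
    """Intersect the sub-image sets selected by the feature description."""
    ids = set.intersection(*(set(s) for s in sub_image_sets))
    return sorted(ids)

def rank_by_distance(screening_set, features, query_feature, k):
    """Sort screening-set images by Euclidean distance to the query feature."""
    def dist(img):
        return sum((a - b) ** 2 for a, b in
                   zip(features[img], query_feature)) ** 0.5
    return sorted(screening_set, key=dist)[:k]

# toy data: image ids -> 2-D feature vectors
feats = {"a": (0.0, 0.0), "b": (1.0, 1.0), "c": (3.0, 0.0)}
screening = build_screening_set([["a", "b", "c"], ["a", "b"], ["b", "a"]])
target = rank_by_distance(screening, feats, (0.9, 0.9), k=1)
print(screening, target)  # ['a', 'b'] ['b']
```

Here image "c" is dropped by the intersection step, and "b" wins the ranking because its feature vector is closest to the query.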
2. The portrait drawing method of claim 1, wherein the step of calculating the distance between the features of each image in the screening image set and the features of the image corresponding to the feature description data comprises:
receiving a calibration operation performed on the screening image set, wherein the calibration operation marks one or more calibration images in the screening image set that are most similar to the target object, and the image corresponding to the feature description data comprises the one or more calibration images; and
calculating the distances between the features of the images in the screening image set other than the calibration images and the features of the calibration images, wherein the distance of each calibration image is set to zero.
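As an illustration only of claim 2's calibration step: the user marks one or more images as most similar to the target; those images receive distance 0, and every other image is scored by its distance to the nearest calibration image. The names and the Euclidean metric below are assumptions.

```python
# Illustrative sketch of calibrated distance scoring (claim 2).

def calibrated_distances(screening_set, features, calibration_ids):
    """Calibration images get distance 0; other images get the distance
    to their nearest calibration image."""
    dists = {}
    for img in screening_set:
        if img in calibration_ids:
            dists[img] = 0.0
        else:
            dists[img] = min(
                sum((a - b) ** 2 for a, b in
                    zip(features[img], features[c])) ** 0.5
                for c in calibration_ids)
    return dists

feats = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (4.0, 0.0)}
d = calibrated_distances(["a", "b", "c"], feats, {"b"})
print(d)  # {'a': 1.0, 'b': 0.0, 'c': 3.0}
```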
3. The portrait drawing method of claim 1, wherein the step of calculating the distance between the features of each image in the screening image set and the features of the image corresponding to the feature description data comprises:
generating a simulated image according to the feature description data; and
calculating the distances between the features of the images in the screening image set and the features of the simulated image.
4. The portrait drawing method of claim 1, wherein the step of ranking and screening the images in the screening image set by their relevance to the feature description data to obtain a target image set comprises:
acquiring a target position where the target object appears; and
sorting the images in the screening image set according to the collection positions of the images and/or the distances between the features of the images and the features of the image corresponding to the feature description data, and taking a specified number of top-ranked images to form the target image set, wherein the closer an image's collection position is to the target position, the higher that image is ranked.
5. The portrait drawing method of claim 4, wherein the step of ranking and screening the images in the screening image set by their relevance to the feature description data to obtain a target image set comprises:
acquiring behavior data of the object corresponding to each image in the screening image set, wherein the behavior data comprises a degree of correlation with the attributes carried by the target object; and
sorting the images in the screening image set according to one or more of: the behavior data of the corresponding objects, the distances between the collection positions of the images and the target position, and the distances between the features of the images and the features of the image corresponding to the feature description data, and taking a specified number of top-ranked images to form the target image set, wherein the higher the correlation in the behavior data, the higher the corresponding image is ranked.
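For illustration only, the multi-criteria ranking of claims 4-5 could combine the three signals into one score: feature distance and geographic distance penalize an image, while behavior-data correlation rewards it. The weights, field names, and linear combination are assumptions; the claims only require that the criteria influence the ordering.

```python
# Illustrative sketch of multi-criteria ranking (claims 4-5).

def rank_multi_criteria(images, w_feat=1.0, w_geo=1.0, w_beh=1.0, k=3):
    """Lower score = better rank: feature distance and distance of the
    collection position from the target position count against an image,
    behavior-data correlation counts in its favor."""
    def score(img):
        return (w_feat * img["feat_dist"]
                + w_geo * img["geo_dist"]
                - w_beh * img["behavior_corr"])
    return [img["id"] for img in sorted(images, key=score)[:k]]

candidates = [
    {"id": "a", "feat_dist": 0.2, "geo_dist": 5.0, "behavior_corr": 0.1},
    {"id": "b", "feat_dist": 0.4, "geo_dist": 0.5, "behavior_corr": 0.9},
    {"id": "c", "feat_dist": 0.3, "geo_dist": 9.0, "behavior_corr": 0.2},
]
print(rank_multi_criteria(candidates, k=2))  # ['b', 'a']
```

Image "b" ranks first despite the largest feature distance because it was collected near the target position and its subject's behavior data correlates strongly with the target's attributes.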
6. The portrait drawing method of claim 1, wherein the step of taking the intersection of the images in the one or more sub-image sets to form the screening image set further comprises:
receiving a selection operation performed on the screening image set, and acquiring the corresponding images according to the selection operation to form the screening image set.
7. The portrait drawing method of any of claims 1-5, wherein the step of obtaining a drawing image of the target object according to the target image set comprises:
displaying the target image set; and
receiving a selection operation in the target image set, and taking the image selected by the selection operation as the drawing image of the target object.
8. The portrait drawing method of any of claims 1-5, wherein the step of obtaining a drawing image of the target object according to the target image set comprises:
displaying the target image set;
receiving a selection operation in the target image set; and
receiving an adjustment operation for adjusting the image selected by the selection operation, and taking the adjusted image as the drawing image of the target object.
9. The portrait drawing method of claim 8, further comprising:
acquiring historical adjustment data corresponding to the adjustment operations for adjusting images selected by selection operations;
analyzing the historical adjustment data to obtain a historical adjustment trend; and
modifying, according to the historical adjustment trend, the screening criteria used when executing the step of obtaining a screening image set from the image database according to the feature description data.
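As an illustration only of claim 9's feedback loop: one simple way to derive a "historical adjustment trend" is to average the user's past adjustment deltas per parameter, then shift the screening criteria by that average. The parameter names and the averaging scheme are assumptions, not the patent's stated method.

```python
# Illustrative sketch: derive an adjustment trend from adjustment
# history and apply it to the screening criteria (claim 9).

def adjustment_trend(history):
    """history: list of {parameter: delta} dicts from past adjustments;
    returns the mean delta per parameter."""
    totals, counts = {}, {}
    for record in history:
        for param, delta in record.items():
            totals[param] = totals.get(param, 0.0) + delta
            counts[param] = counts.get(param, 0) + 1
    return {p: totals[p] / counts[p] for p in totals}

def apply_trend(criteria, trend):
    """Shift each screening criterion by its trend, if any."""
    return {p: criteria.get(p, 0.0) + trend.get(p, 0.0) for p in criteria}

history = [{"eye_size": 0.25}, {"eye_size": 0.75, "nose_width": -0.25}]
trend = adjustment_trend(history)
print(trend)  # {'eye_size': 0.5, 'nose_width': -0.25}
print(apply_trend({"eye_size": 1.0, "nose_width": 0.5}, trend))
# {'eye_size': 1.5, 'nose_width': 0.25}
```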
10. The portrait drawing method of claim 1, wherein the step of acquiring feature description data of a target object to be depicted comprises:
receiving selection operations that respectively select features from one or more types of feature groups, and forming the feature description data according to the results of the selection operations, wherein each type of feature group describes one type of feature of an object; or
acquiring image data captured by a capture device, and analyzing the image data to obtain the feature description data.
11. The portrait drawing method of claim 1, wherein before the step of obtaining a screening image set from the image database according to the feature description data, the method further comprises:
receiving a setting operation on facial feature parameters, wherein the setting operation sets definition standards of different types for each facial feature, and each definition standard forms a data structure; and
storing each image in an existing image set into the data structure corresponding to the definition standard that the image meets, wherein when an image meets a plurality of definition standards, the image is stored in each of the corresponding data structures.
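For illustration only of claim 11's indexing step: each definition standard can be modeled as a predicate with its own bucket, and an image satisfying several standards is stored in every matching bucket. The standard names and predicates below are invented for the example.

```python
# Illustrative sketch: index images under every definition standard
# they satisfy (claim 11).

def index_images(images, standards):
    """standards: {name: predicate(image) -> bool}; returns one bucket
    (list of image ids) per standard, with multi-standard images
    appearing in every bucket they match."""
    buckets = {name: [] for name in standards}
    for img in images:
        for name, matches in standards.items():
            if matches(img):
                buckets[name].append(img["id"])
    return buckets

standards = {
    "eyes/large": lambda im: im["eye_size"] >= 0.6,
    "eyes/narrow": lambda im: im["eye_aspect"] < 0.3,
}
images = [
    {"id": 1, "eye_size": 0.7, "eye_aspect": 0.2},  # meets both standards
    {"id": 2, "eye_size": 0.4, "eye_aspect": 0.5},  # meets neither
]
print(index_images(images, standards))
# {'eyes/large': [1], 'eyes/narrow': [1]}
```

Image 1 is stored twice (once per matching standard), which is exactly the duplication the claim calls for when an image meets several definition standards.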
12. A portrait drawing apparatus, comprising:
a data acquisition module, configured to acquire feature description data of a target object to be depicted, wherein the feature description data comprises a description of facial features; a selection operation that selects features from one or more feature groups is received, and the feature description data is formed according to a result of the selection operation; each facial feature is divided into a plurality of types, and the description of facial features characterizes the type of one or more facial features;
a screening module, configured to obtain a screening image set from an image database according to the feature description data, wherein the image database comprises multiple types of image sets, each type of image set embodies one facial feature of the human face, each type of image set comprises a plurality of sub-image sets, and each sub-image set describes one type of that facial feature;
a sorting module, configured to rank and screen the images in the screening image set by their relevance to the feature description data to obtain a target image set; and
an obtaining module, configured to obtain a drawing image of the target object according to the target image set;
wherein the screening module is specifically configured to:
acquire, from at least one type of image set, the sub-image sets that match the feature description data;
take the intersection of the images in the one or more sub-image sets to form the screening image set; and
when the feature description data describes only some of the facial features, add to the screening image set recommended feature images of objects in historical data that carry the same attributes as the target object;
wherein the sorting module is specifically configured to:
calculate the distance between the features of each image in the screening image set and the features of the image corresponding to the feature description data; and
sort the images in the screening image set by this distance, and take a specified number of top-ranked images to form the target image set.
13. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method according to any of claims 1-11 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN201810820310.6A 2018-07-24 2018-07-24 Portrait drawing method and device and electronic equipment Active CN109034055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810820310.6A CN109034055B (en) 2018-07-24 2018-07-24 Portrait drawing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN109034055A CN109034055A (en) 2018-12-18
CN109034055B true CN109034055B (en) 2021-10-01

Family

ID=64645699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810820310.6A Active CN109034055B (en) 2018-07-24 2018-07-24 Portrait drawing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109034055B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063417A (en) * 2013-03-21 2014-09-24 株式会社东芝 Picture Drawing Support Apparatus, Method And Program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847144A (en) * 2009-03-27 2010-09-29 上海薇艾信息科技有限公司 Portrait processing method for Internet dating
CN103678394A (en) * 2012-09-21 2014-03-26 孟露芳 Image matching degree based marriage dating recommendation method and system
CN103824051B (en) * 2014-02-17 2017-05-03 北京旷视科技有限公司 Local region matching-based face search method
CN106339428B (en) * 2016-08-16 2019-08-23 东方网力科技股份有限公司 Suspect's personal identification method and device based on video big data
CN107967458A (en) * 2017-12-06 2018-04-27 宁波亿拍客网络科技有限公司 A kind of face identification method
CN108182232B (en) * 2017-12-27 2018-10-23 掌阅科技股份有限公司 Personage's methods of exhibiting, electronic equipment and computer storage media based on e-book

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063417A (en) * 2013-03-21 2014-09-24 株式会社东芝 Picture Drawing Support Apparatus, Method And Program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Progress in Intelligent Facial Composite Sketch Technology; Bu Fanliang et al.; Journal of People's Public Security University of China (Science and Technology Edition); 2017-05-15 (No. 2); see sections 1 and 3 *


Similar Documents

Publication Publication Date Title
US11567989B2 (en) Media unit retrieval and related processes
US10217027B2 (en) Recognition training apparatus, recognition training method, and storage medium
US8510252B1 (en) Classification of inappropriate video content using multi-scale features
CN109657533A (en) Pedestrian recognition methods and Related product again
CN109635149B (en) Character searching method and device and electronic equipment
CN110414550B (en) Training method, device and system of face recognition model and computer readable medium
WO2016139964A1 (en) Region-of-interest extraction device and region-of-interest extraction method
KR20170131924A (en) Method, apparatus and computer program for searching image
US20140233811A1 (en) Summarizing a photo album
CN111666976A (en) Feature fusion method and device based on attribute information and storage medium
CN109241316B (en) Image retrieval method, image retrieval device, electronic equipment and storage medium
CN109034055B (en) Portrait drawing method and device and electronic equipment
CN114359783A (en) Abnormal event detection method, device and equipment
KR100827845B1 (en) Apparatus and method for providing person tag
CN115935049A (en) Recommendation processing method and device based on artificial intelligence and electronic equipment
WO2014186392A2 (en) Summarizing a photo album
CN111125545A (en) Target object determination method and device and electronic equipment
EP3139282A1 (en) Media unit retrieval and related processes
US20230177880A1 (en) Device and method for inferring interaction relathionship between objects through image recognition
US11250271B1 (en) Cross-video object tracking
EP3139281A1 (en) Media unit retrieval and related processes
EP3139284A1 (en) Media unit retrieval and related processes
EP3139279A1 (en) Media unit retrieval and related processes
CN109815359B (en) Image retrieval method and related product
US20210327232A1 (en) Apparatus and a method for adaptively managing event-related data in a control room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant