CN111897986A - Image selection method and device, storage medium and terminal - Google Patents

Image selection method and device, storage medium and terminal Download PDF

Info

Publication number
CN111897986A
CN111897986A (application CN202010604116.1A)
Authority
CN
China
Prior art keywords
image
preprocessing
library
images
designated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010604116.1A
Other languages
Chinese (zh)
Inventor
贾川民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202010604116.1A priority Critical patent/CN111897986A/en
Publication of CN111897986A publication Critical patent/CN111897986A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/51: Indexing; Data structures therefor; Storage structures
    • G06F 16/53: Querying
    • G06F 16/538: Presentation of query results
    • G06F 16/54: Browsing; Visualisation therefor
    • G06F 16/55: Clustering; Classification
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval using metadata automatically derived from the content
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image selection method and apparatus, a storage medium, and a terminal. The method comprises: identifying each second image in a second image library, its corresponding image category, and the mapping relation between each second image and its category, to obtain corresponding identification information; and, in response to a user's first selection instruction requesting images of a designated category, selecting at least one designated image from the second image library according to the identification information as the designated image selected by the user.

Description

Image selection method and device, storage medium and terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image selection method, an image selection apparatus, a storage medium, and a terminal.
Background
A large amount of image data can be acquired because image capture devices such as cameras are now widely deployed in many places, for example homes, offices, and shopping malls.
Meanwhile, in scenarios where many image capture devices such as cameras are deployed at multiple points, a user accumulates a large amount of image data. If the user wants to select images of a specified category from this data, manual selection is required; the selection process is therefore cumbersome, often introduces the user's personal preferences, and is not reproducible.
Alternatively, images may be selected at random from the data, in which case the selected images are generally not of the category the user specified. In either case, the user cannot quickly and intelligently select images of a designated category from a large amount of image data.
Disclosure of Invention
The embodiment of the application provides an image selection method, an image selection device, a storage medium and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides an image selecting method, where the method includes:
constructing a first image library by a plurality of first images acquired by a plurality of image acquisition devices;
preprocessing each first image in the first image library according to a preprocessing model to obtain a corresponding second image, and constructing a second image library by a plurality of second images;
classifying each second image in the second image library according to a preset neural network model for classifying the images to obtain a corresponding image category;
identifying each second image, the corresponding image category and the mapping relation between each second image and the corresponding image category in the second image library to obtain corresponding identification information;
and responding to a first selection instruction of a user for selecting the images of the designated category, and selecting at least one designated image from the second image library according to the identification information to be used as the designated image selected by the user.
In a second aspect, an embodiment of the present application provides an image selecting apparatus, including:
the first image library construction module is used for constructing a first image library by a plurality of first images acquired by a plurality of image acquisition devices;
the preprocessing module is used for preprocessing each first image in the first image library according to a preprocessing model to obtain a corresponding second image;
the second image library construction module is used for forming a second image library by a plurality of second images obtained by the preprocessing module through preprocessing;
the image classification module is used for classifying each second image in the second image library according to a preset neural network model for classifying the images to obtain a corresponding image category;
the identification module is used for identifying each second image in the second image library, the corresponding image category and the mapping relation between each second image and the corresponding image category to obtain corresponding identification information;
and the image selecting module is used for responding to a first selecting instruction of a user for selecting the images of the appointed types, and selecting at least one appointed image from the second image library as the appointed image selected by the user according to the identification information.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiment of the application, each second image, the corresponding image category and the mapping relationship between each second image and the corresponding image category in the second image library are identified to obtain corresponding identification information; and responding to a first selection instruction of the user for selecting the images of the designated category, and selecting at least one designated image from the second image library as the designated image selected by the user according to the identification information. Because the identification information capable of identifying each second image, the corresponding image category and the mapping relation between each second image and the corresponding image category in the second image library is introduced, the selected specified images can be accurately indexed according to the identification information, and at least one specified image is quickly and intelligently selected from the second image library to serve as the specified image selected by the user.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flowchart of an image selecting method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image selecting apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of these terms can be understood by those skilled in the art according to the specific case. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. "And/or" describes an association between objects and covers three relationships: for example, "A and/or B" may mean that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates an "or" relationship between the objects it joins.
To date, existing methods for selecting images of a designated category either rely on manual selection, which makes the process cumbersome, time-consuming, and labor-intensive, or on random selection, in which case the selected images are not of the category the user designated and selection accuracy is low. The present application therefore provides an image selection method, an image selection apparatus, a storage medium, and a terminal to solve these problems in the related art. In the technical scheme provided by the application, each second image in the second image library, its corresponding image category, and the mapping relation between each second image and its category are identified to obtain corresponding identification information; then, in response to a user's first selection instruction requesting images of a designated category, at least one designated image is selected from the second image library according to the identification information as the designated image selected by the user. Because identification information is introduced that identifies each second image, its category, and the mapping between them, the designated images can be accurately indexed by that information and selected from the second image library quickly and intelligently. The method is described in detail in the exemplary embodiments below.
The image selecting method provided by the embodiment of the present application will be described in detail below with reference to fig. 1. The method may be implemented in dependence on a computer program, and may be run on an image selection device. The computer program may be integrated into the application or may run as a separate tool-like application. The image selecting device in the embodiment of the present application may be a user terminal, including but not limited to: personal computers, tablet computers, handheld devices, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and the like. The user terminals may be called different names in different networks, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent or user equipment, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), terminal equipment in a 5G network or future evolution network, and the like.
Referring to fig. 1, a flow chart of an image selecting method according to an embodiment of the present application is shown. As shown in fig. 1, the image selecting method according to the embodiment of the present application may include the following steps:
s101, a first image library is constructed by a plurality of first images acquired by a plurality of image acquisition devices.
In this step, the plurality of first images are acquired by a plurality of image acquisition devices respectively arranged in a preset area, and there is no overlapping acquisition area between the plurality of image acquisition devices.
In an actual application scenario, the image capturing device may be a camera with a shooting function.
For example, in one application scenario, twenty cameras are deployed in a residential community. These twenty cameras acquire the plurality of first images, and because their acquisition areas do not overlap, no first image in the first image library duplicates the scene captured by any other first image.
In this step, each first image in the first image library is an original image acquired by the image acquisition device, and is an image without any image processing.
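The collection step S101 can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each capture device writes JPEG files into its own directory, and the library is a simple list of records.

```python
from pathlib import Path

def build_first_image_library(camera_dirs):
    """Collect raw (unprocessed) images from several capture devices
    into one first-image library. The per-camera directory layout and
    the record fields are illustrative assumptions."""
    library = []
    for cam_id, cam_dir in enumerate(camera_dirs):
        # Each entry keeps the source camera so later steps can add
        # capture-location metadata.
        for path in sorted(Path(cam_dir).glob("*.jpg")):
            library.append({"camera": cam_id, "path": str(path)})
    return library
```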
S102, preprocessing each first image in the first image library according to the preprocessing model to obtain a corresponding second image, and constructing a second image library by the plurality of second images.
In a possible implementation manner, the preprocessing model includes a first preprocessing model capable of highlighting at least one key information element in the set of key information elements, and the preprocessing each first image in the first image library according to the preprocessing model to obtain a corresponding second image includes the following steps:
and according to the first preprocessing model, performing first preprocessing on each first image in the first image library to obtain a corresponding second image with at least one key information element highlighted, wherein the first preprocessing is preprocessing for highlighting the at least one key information element.
In this step, the at least one key information element includes at least one of:
a feature information element of the subject object in the first image, a facial expression information element of the subject object in the first image, and an accessory information element of the subject object in the first image.
In addition to the above-mentioned key information elements, other key information elements may be used, and the meaning of the key information elements is not particularly limited.
In a specific scene, when the first image is a picture of a white Garfield cat, one key information element may be the fur-feature information element corresponding to the cat's white fur. Through a first preprocessing process that highlights this fur-feature element, a filter is applied to the background color of the picture to obtain a corresponding second image.
In the second image, a blue background color contrasts clearly with the cat's white fur and makes its snow-white coat stand out.
The first preprocessing process described above is merely an example, and is not described herein again. The first preprocessing process corresponding to the first preprocessing model can be adjusted according to the requirements of different specific application scenarios, which is not described herein again.
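As one illustration of such a first preprocessing step, the sketch below recolors the background so the subject's key element stands out. The subject mask and the blue background color are assumptions for the example; in practice the mask would come from the first preprocessing model itself.

```python
import numpy as np

def highlight_subject(image, subject_mask, bg_color=(30, 60, 200)):
    """First-preprocessing sketch: keep the subject's pixels and
    recolor the background so a key element (e.g. white fur against
    blue) is highlighted. `subject_mask` is a boolean array, True
    where the subject is."""
    out = image.copy()
    out[~subject_mask] = bg_color  # replace background pixels only
    return out
```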
It should be noted that the first preprocessing model is built with a conventional model-construction method, the details of which are not repeated here. In general, model construction uses a training set and a test set: an initial model is trained on the image data in the training set, then evaluated on the image data in the test set and iteratively corrected to obtain the final model.
In one possible implementation, before preprocessing each first image in the first image library according to the first preprocessing model, the method further includes the following steps:
reading at least one key information element;
wherein the key information elements include at least one of:
a feature information element of the subject object in the first image, a facial expression information element of the subject object in the first image, and an accessory information element of the subject object in the first image.
In addition to the above-mentioned key information elements, other key information elements may be used, and the meaning of the key information elements is not particularly limited. For the description of the key information elements, please refer to the foregoing description, which is not repeated herein.
In another possible implementation manner, the preprocessing model includes a second preprocessing model capable of removing at least one unrelated background object and/or background person, and the preprocessing each first image in the first image library according to the preprocessing model to obtain a corresponding second image further includes the following steps:
and according to the second preprocessing model, carrying out second preprocessing on each first image in the first image library to obtain a corresponding second image with at least one background object and/or background person removed, wherein the second preprocessing is preprocessing for removing at least one unrelated background object and/or background person.
In a specific application scenario, when the current first image includes at least one unrelated background object and/or background person, for example a water cup, second preprocessing is performed on the current first image according to the second preprocessing model: the cup is removed and its region is replaced with the complete background picture chosen by the user, so that a second image with the unrelated background object (the cup) removed is finally obtained.
The above only illustrates an application scene in which the water cup is an unrelated background object in a certain application scene. In other application scenarios, the unrelated background object may also be a flower, or a background object such as a fan that is unrelated to the main object in the first image, and will not be described herein again.
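A minimal sketch of this second preprocessing step follows. It assumes the object mask and the user-chosen clean background are already available; a real system would obtain the mask from a segmentation model inside the second preprocessing model.

```python
import numpy as np

def remove_background_object(image, object_mask, clean_background):
    """Second-preprocessing sketch: replace the pixels of an unrelated
    object (e.g. a cup) with the user-selected clean background.
    `object_mask` is True exactly where the object is."""
    out = image.copy()
    out[object_mask] = clean_background[object_mask]
    return out
```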
It should be noted that the second preprocessing model is likewise built with a conventional model-construction method, the details of which are not repeated here. As above, an initial model is trained on a training set, then evaluated on a test set and iteratively corrected to obtain the final model.
The above only illustrates two kinds of preprocessing processes, and in addition to the two listed preprocessing processes, other preprocessing processes may be adopted, which is not described in detail herein.
S103, classifying each second image in the second image library according to a preset neural network model for classifying the images to obtain a corresponding image category.
In this step, the preset neural network model is built on the VGG model. Its classification accuracy is high, so the images in the second image library can be classified precisely, for example into those belonging to the "automobile" category and those belonging to the "pet dog" category.
Because the VGG model has more network layers and smaller convolution kernels, it can extract more image features, which further improves classification accuracy. However, the network is also more complex, so the training process takes longer and the demands on computer hardware are higher.
In a specific application scenario, different preset neural network models can be selected according to the number of second images in the second image library. For example, in a case where the number of second images in the second image library is not large and the recognition accuracy of the image classification is required to be high, the VGG model described above may be selected.
In practical applications, the VGG model has the following characteristics:
Small convolution kernels: convolutions are 3x3 throughout (with occasional 1x1);
Small pooling kernels: compared with AlexNet's 3x3 pooling kernels, VGG uses 2x2 pooling kernels throughout;
Deeper and wider feature maps: because the convolutions focus on expanding the number of channels while pooling narrows width and height, the architecture grows deeper and wider while the growth in computation is restrained;
Fully connected layers converted to convolutions: at test time, the three fully connected layers of the training stage are replaced by three convolutional layers that reuse the trained parameters, so the resulting fully convolutional network is not bound to a fixed input size and can accept inputs of any width or height.
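The 3x3 convolution and 2x2 max-pooling operations attributed to VGG above can be shown concretely in a toy single-channel sketch. This is an illustration of the two primitives only, not the patent's classifier.

```python
import numpy as np

def conv3x3(x, k):
    """Valid 3x3 convolution on a single-channel image: the small
    kernel VGG stacks repeatedly instead of using large kernels."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

def maxpool2x2(x):
    """2x2 max pooling: halves width and height, as in VGG."""
    h, w = x.shape
    trimmed = x[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```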
In another specific application scenario, when the second image library contains many images and per-image recognition accuracy is less critical, an optimized convolutional neural network can be selected instead. For example, the input layer takes pictures at a preset size; the convolutional layers extract features from the picture and the pooling layers reduce the image dimensionality. The first preset size is not particularly limited here. The input of the layer-2 convolution is the output of the layer-1 convolution, at a second preset size, which is likewise not particularly limited. Layers 3 to 5 of the convolutional stack share the same structure: instead of alternating convolution with pooling, a fully convolutional arrangement is adopted so as to extract as many image features as possible. Before the data is passed into the fully connected layer, the output of the previous layer is flattened from multiple dimensions to one dimension, which makes the fully connected layer's links more effective. The number of nodes in the output layer is determined by the actual classification requirements. The classification process itself is also continuously optimized, using conventional optimization methods that are not detailed here.
And S104, identifying each second image in the second image library, the corresponding image category and the mapping relation between each second image and the corresponding image category to obtain corresponding identification information.
In this step, besides the information above, the identification information may also record the time at which the image acquisition device captured the image, so that each second image can be located precisely; the time information can be as specific as a particular moment of a particular day.
For another example, the address at which the image acquisition device captured the image may be identified; the address information can be as specific as a particular building in a residential community.
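The identification step S104 amounts to building an index from each second image to its category, capture time, and capture address. The sketch below assumes each image record is a dictionary with illustrative field names.

```python
def build_identification_index(second_images):
    """Attach identification info to each second image: its category,
    plus capture time and address so the image can be located
    precisely. All field names here are illustrative assumptions."""
    index = {}
    for img in second_images:
        index[img["id"]] = {
            "category": img["category"],
            "captured_at": img.get("captured_at"),  # e.g. a timestamp
            "address": img.get("address"),          # e.g. a building
        }
    return index
```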
And S105, responding to a first selection instruction of the user for selecting the images of the designated category, and selecting at least one designated image from the second image library according to the identification information to be used as the designated image selected by the user.
In practical applications, the selected designated image may be one or multiple designated images, and the number of the designated images is not particularly limited.
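The selection step S105 then reduces to a lookup over the identification information: return every image whose identified category matches the one the user designated, optionally capped at some count. A minimal sketch under the same assumed index structure:

```python
def select_by_category(index, category, limit=None):
    """Answer a first selection instruction: return the ids of the
    second images whose identification info matches the designated
    category. `index` maps image id -> identification info."""
    matches = [img_id for img_id, info in index.items()
               if info["category"] == category]
    return matches if limit is None else matches[:limit]
```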
In one possible implementation, after at least one designated image is selected from the second image library according to the identification information, the method further includes the steps of:
and responding to a second selection instruction of the user for selecting the appointed display equipment, and displaying the selected at least one appointed image on the corresponding display equipment, wherein the second selection instruction carries the MAC address information of the appointed display equipment.
In this step, the precise positioning of the specified display device can be realized through the MAC address information of the display device.
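Locating the designated display device by the MAC address carried in the second selection instruction can be sketched as a registry lookup. The registry structure and the strict MAC format check are assumptions for illustration; the patent does not specify how devices are registered.

```python
import re

# Standard colon-separated MAC format, e.g. "AA:BB:CC:DD:EE:FF".
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def resolve_display(mac, registry):
    """Resolve the MAC address from a second selection instruction to
    a display-device handle via an assumed MAC -> device registry."""
    if not MAC_RE.match(mac):
        raise ValueError(f"malformed MAC address: {mac}")
    return registry[mac]
```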
In one possible implementation manner, the step of displaying the selected at least one designated image on the corresponding display device includes the following steps:
under the condition that the number of the designated images is two or more, respectively calculating the weight value of each designated image;
sequencing each appointed image according to the image weight value of each appointed image;
and displaying each appointed image on the corresponding position of the appointed display equipment according to the corresponding relation between the image sequence and the display position of the image on the appointed display equipment.
In practical application, when two or more designated images are selected, the method for calculating the weight value of each designated image is a conventional method, and is not described herein again.
After the weight values of the designated images are obtained, the images are sorted by weight. In practical applications, the designated image with the largest weight is placed first and displayed in the central area of the designated display device, while the designated image with the smallest weight is displayed in a boundary area, for example along the upper, lower, left, or right edge.
This is only one possible display method. In another, the designated image with the largest weight is displayed on the top layer of the designated display device and the image with the smallest weight on the layer beneath it; to avoid images occluding one another, the transparency of each display layer can be set, for example to fifty percent. The display method can also be adapted to different application scenarios, which are not detailed here.
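The sort-and-place logic above can be sketched as follows. The weight computation is left out, as in the text, and the position names are illustrative; the heaviest image is mapped to the first (central) slot and the lightest to the last (boundary) slot.

```python
def arrange_by_weight(images_with_weights, positions):
    """Sort designated images by weight, descending, and assign them
    to display positions in order: the first position is assumed to
    be the centre of the designated display device, the last a
    boundary area. `images_with_weights` is a list of (image, weight)."""
    ordered = sorted(images_with_weights, key=lambda x: x[1], reverse=True)
    return {pos: img for (img, _w), pos in zip(ordered, positions)}
```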
In the embodiment of the application, each second image in the second image library, its corresponding image category, and the mapping relationship between them are identified to obtain corresponding identification information; in response to a first selection instruction in which the user selects images of a designated category, at least one designated image is selected from the second image library according to the identification information as the image designated by the user. The selected designated images can therefore be accurately indexed by the identification information, and at least one designated image can be quickly and intelligently selected from the second image library as the image designated by the user.
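The identification information described above amounts to an image-to-category mapping that can be inverted into a category index, so that a first selection instruction becomes a direct lookup. The sketch below assumes a simple dictionary representation; the category names are illustrative.

```python
# Sketch (assumed representation): identification information as a
# category -> images index built from the image/category mapping.
from collections import defaultdict

def build_identification_info(image_categories):
    """image_categories: dict of image_id -> category. Returns category index."""
    index = defaultdict(list)
    for image_id, category in image_categories.items():
        index[category].append(image_id)
    return index

def select_designated(index, category):
    # Select the designated images for the user-selected category.
    return index.get(category, [])

index = build_identification_info(
    {"img1": "portrait", "img2": "landscape", "img3": "portrait"})
print(select_designated(index, "portrait"))  # ['img1', 'img3']
```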
The following are apparatus embodiments of the present invention, which may be used to perform the method embodiments of the present invention. For details not disclosed in the apparatus embodiments, reference is made to the method embodiments of the present invention.
Referring to fig. 2, a schematic structural diagram of an image selection apparatus according to an exemplary embodiment of the invention is shown. The image selection apparatus may be implemented in software, hardware, or a combination of the two as all or part of a terminal. The apparatus comprises a first image library construction module 10, a preprocessing module 20, a second image library construction module 30, an image classification module 40, an identification module 50, and an image selection module 60.
Specifically, the first image library constructing module 10 is configured to construct a first image library by using a plurality of first images acquired by a plurality of image acquiring devices;
the preprocessing module 20 is configured to preprocess each first image in the first image library according to the preprocessing model to obtain a corresponding second image;
a second image library construction module 30, configured to construct a second image library from a plurality of second images obtained through preprocessing by the preprocessing module 20;
the image classification module 40 is configured to classify each second image in the second image library according to a preset neural network model for classifying the images, so as to obtain a corresponding image category;
the identification module 50 is configured to identify each second image in the second image library, a corresponding image category, and a mapping relationship between each second image and the corresponding image category, so as to obtain corresponding identification information;
and the image selecting module 60 is configured to, in response to a first selection instruction of the user for selecting an image of a specific category, select at least one specific image from the second image library according to the identification information, and use the selected specific image as the specific image selected by the user.
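The cooperation of the modules enumerated above can be illustrated with a minimal composition. This is only a sketch: the preprocessing and classification models are stand-in callables, not the neural network models the embodiment presupposes.

```python
# Illustrative composition of the modules as plain callables; the concrete
# preprocessing and classification models are assumed and stubbed out here.

class ImageSelector:
    def __init__(self, preprocess, classify):
        self.preprocess = preprocess  # preprocessing model (module 20)
        self.classify = classify      # classification model (module 40)

    def run(self, first_images, category):
        # Modules 10/20/30: build the first library, preprocess, build the second.
        second_library = [self.preprocess(img) for img in first_images]
        # Modules 40/50: classify and record the image -> category mapping.
        identification = {img: self.classify(img) for img in second_library}
        # Module 60: select designated images matching the requested category.
        return [img for img, cat in identification.items() if cat == category]

selector = ImageSelector(
    preprocess=str.upper,
    classify=lambda s: "portrait" if "P" in s else "other")
print(selector.run(["p1", "x2"], "portrait"))  # ['P1']
```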
Optionally, the preprocessing model includes a first preprocessing model capable of highlighting at least one key information element in the set of key information elements, and the preprocessing module 20 is specifically configured to:
and according to the first preprocessing model, performing first preprocessing on each first image in the first image library to obtain a corresponding second image with at least one key information element highlighted, wherein the first preprocessing is preprocessing for highlighting the at least one key information element.
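As a concrete illustration of the first preprocessing, highlighting a key information element might mean brightening its region in the image. The sketch below is an assumption for clarity only: images are 2-D lists of gray values, and the highlighted region and gain are hypothetical, since the patent does not specify the first preprocessing model's internals.

```python
# Hedged sketch of the "first preprocessing": highlight a key information
# element by brightening its region. Region and gain are illustrative.

def highlight_region(image, region, gain=1.5):
    """image: 2-D list of pixel values; region: (top, left, bottom, right)."""
    top, left, bottom, right = region
    out = [row[:] for row in image]  # copy so the first image is untouched
    for y in range(top, bottom):
        for x in range(left, right):
            # Brighten pixels inside the key-element region, clipped to 255.
            out[y][x] = min(255, int(out[y][x] * gain))
    return out

img = [[100, 100], [100, 100]]
print(highlight_region(img, (0, 0, 1, 2)))  # [[150, 150], [100, 100]]
```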
Optionally, the apparatus further comprises:
a reading module (not shown in fig. 2) for reading at least one key information element before the preprocessing module 20 preprocesses each first image in the first image library according to the first preprocessing model; the key information elements read by the reading module at least include one of the following items: a feature information element of the subject object in the first image, a facial expression information element of the subject object in the first image, and an accessory information element of the subject object in the first image.
Optionally, the preprocessing model includes a second preprocessing model capable of removing at least one unrelated background object and/or background person, and the preprocessing module 20 is further specifically configured to:
and according to the second preprocessing model, carrying out second preprocessing on each first image in the first image library to obtain a corresponding second image with at least one background object and/or background person removed, wherein the second preprocessing is preprocessing for removing at least one unrelated background object and/or background person.
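The second preprocessing can likewise be illustrated with a toy example. In practice the foreground mask would be produced by the second preprocessing model; here it is supplied directly, which is an assumption made only to keep the sketch self-contained.

```python
# Hedged sketch of the "second preprocessing": remove unrelated background
# pixels using a foreground mask supplied by the (stubbed) model.

def remove_background(image, mask, fill=0):
    """Replace pixels where mask is False (background) with a fill value."""
    return [
        [pixel if keep else fill for pixel, keep in zip(row, keep_row)]
        for row, keep_row in zip(image, mask)
    ]

img = [[10, 20], [30, 40]]
mask = [[True, False], [False, True]]   # False marks background to remove
print(remove_background(img, mask))     # [[10, 0], [0, 40]]
```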
Optionally, the apparatus further comprises:
and a display module (not shown in fig. 2), configured to, after the image selection module 60 selects at least one designated image from the second image library according to the identification information, display the selected at least one designated image on the corresponding display device in response to a second selection instruction of the user for selecting a designated display device, where the second selection instruction carries the MAC address information of the designated display device.
Optionally, the display module is specifically configured to:
calculate, when the number of the designated images is two or more, the weight value of each designated image;
sort the designated images according to their weight values;
and display each designated image at the corresponding position on the designated display device according to the correspondence between the image order and the display positions of images on the designated display device.
Optionally, the plurality of first images are acquired by a plurality of image acquisition devices respectively arranged in a preset area, and the acquisition areas of the image acquisition devices do not overlap.
It should be noted that the division into the functional modules described above is only an example of how the image selection apparatus may execute the image selection method; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image selection apparatus and the image selection method provided by the above embodiments belong to the same concept; details of the implementation process are given in the method embodiments and are not described again here.
In the embodiment of the application, the identification module identifies each second image in the second image library, its corresponding image category, and the mapping relationship between them to obtain corresponding identification information; and the image selection module, in response to a first selection instruction of the user for selecting images of a designated category, selects at least one designated image from the second image library according to the identification information as the image designated by the user. Because identification information is introduced that identifies each second image, its category, and the mapping between them, the selected designated images can be accurately indexed according to the identification information, and at least one designated image can be quickly and intelligently selected from the second image library as the image designated by the user.
The present invention also provides a computer readable medium having stored thereon program instructions that, when executed by a processor, implement the image selection method provided by the various method embodiments described above.
The present invention also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image selection method described in the above-mentioned method embodiments.
Please refer to fig. 3, which provides a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 3, the terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface).
The processor 1001 may include one or more processing cores. The processor 1001 connects various components throughout the terminal 1000 using various interfaces and lines, and performs the various functions of the terminal 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and invoking the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 1001 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 1001 and may instead be implemented by a separate chip.
The memory 1005 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1005 may be at least one storage device located remotely from the processor 1001. As shown in fig. 3, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an image selection application program.
In the terminal 1000 shown in fig. 3, the user interface 1003 is mainly used to provide an input interface for the user and to acquire the data input by the user, and the processor 1001 may be configured to call the image selection application stored in the memory 1005 and specifically perform the following operations:
constructing a first image library by a plurality of first images acquired by a plurality of image acquisition devices;
preprocessing each first image in the first image library according to the preprocessing model to obtain a corresponding second image, and constructing a second image library by using a plurality of second images;
classifying each second image in the second image library according to a preset neural network model for classifying the images to obtain corresponding image classes;
identifying each second image, the corresponding image category and the mapping relation between each second image and the corresponding image category in the second image library to obtain corresponding identification information;
and responding to a first selection instruction of the user for selecting the images of the designated category, and selecting at least one designated image from the second image library as the designated image selected by the user according to the identification information.
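The five operations above can be sketched end to end as a single function. This is an illustration only: the preprocessing model and the neural network classification model are replaced by trivial stand-in lambdas, which are assumptions made so the sketch runs on its own.

```python
# End-to-end sketch of the processor's operations with stand-in models.

def select_images(first_images, preprocess, classify, wanted_category):
    # Step 1-2: first library -> preprocess each image -> second library.
    second_library = [preprocess(img) for img in first_images]
    # Step 3: classify each second image to obtain its category.
    categories = [classify(img) for img in second_library]
    # Step 4: identification info = image/category pairs (the mapping relation).
    identification = list(zip(second_library, categories))
    # Step 5: respond to the first selection instruction for a designated category.
    return [img for img, cat in identification if cat == wanted_category]

result = select_images(
    ["cat_1", "dog_1", "cat_2"],
    preprocess=lambda s: s.strip(),           # stand-in preprocessing model
    classify=lambda s: s.split("_")[0],       # stand-in classification model
    wanted_category="cat",
)
print(result)  # ['cat_1', 'cat_2']
```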
In an embodiment, the preprocessing model includes a first preprocessing model capable of highlighting at least one key information element in the set of key information elements, and the processor 1001, when performing the preprocessing on each first image in the first image library according to the preprocessing model to obtain a corresponding second image, specifically performs the following operations:
and according to the first preprocessing model, performing first preprocessing on each first image in the first image library to obtain a corresponding second image with at least one key information element highlighted, wherein the first preprocessing is preprocessing for highlighting the at least one key information element.
In one embodiment, the processor 1001 further performs the following operations before performing the pre-processing on each first image in the first image library according to the first pre-processing model:
reading at least one key information element;
wherein the key information elements include at least one of:
a feature information element of the subject object in the first image, a facial expression information element of the subject object in the first image, and an accessory information element of the subject object in the first image.
In an embodiment, the preprocessing model includes a second preprocessing model capable of removing at least one unrelated background object, and when the processor 1001 performs the preprocessing on each first image in the first image library according to the preprocessing model to obtain a corresponding second image, the following operations are further specifically performed:
and according to a second preprocessing model, carrying out second preprocessing on each first image in the first image library to obtain a corresponding second image with at least one background object removed, wherein the second preprocessing is preprocessing for removing at least one unrelated background object.
In one embodiment, the processor 1001 further performs the following operations after the selecting of the at least one designated image from the second image library according to the identification information:
and responding to a second selection instruction of the user for selecting the appointed display equipment, and displaying the selected at least one appointed image on the corresponding display equipment, wherein the second selection instruction carries the MAC address information of the appointed display equipment.
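Routing by the MAC address carried in the second selection instruction can be sketched as a registry lookup. The registry dictionary and the "shown" field are hypothetical; a real terminal would address the designated display device over the network rather than mutate a dictionary.

```python
# Sketch: dispatch the designated images to the display device named by the
# MAC address in the second selection instruction. Registry is an assumption.

def display_on_device(instruction, images, device_registry):
    """instruction carries the MAC address of the designated display device."""
    mac = instruction["mac"]
    device = device_registry.get(mac)
    if device is None:
        raise KeyError(f"no display device registered for MAC {mac}")
    device["shown"] = list(images)  # stand-in for actually rendering the images
    return device["name"]

registry = {"AA:BB:CC:DD:EE:FF": {"name": "living-room", "shown": []}}
name = display_on_device({"mac": "AA:BB:CC:DD:EE:FF"}, ["img1"], registry)
print(name)  # living-room
```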
In one embodiment, when the processor 1001 executes the displaying of the selected at least one designated image on the corresponding display device, the following operations are specifically executed:
under the condition that the number of the designated images is two or more, respectively calculating the weight value of each designated image;
sorting the designated images according to their weight values;
and displaying each designated image at the corresponding position on the designated display device according to the correspondence between the image order and the display positions of images on the designated display device.
In one embodiment, the plurality of first images are acquired by a plurality of image acquisition devices respectively arranged in a preset area, and no overlapping acquisition area exists among the plurality of image acquisition devices.
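The non-overlap condition on the acquisition areas can be checked with simple geometry. The sketch below models each acquisition area as an axis-aligned rectangle (left, top, right, bottom), which is an assumed simplification; real camera fields of view are not rectangular.

```python
# Illustrative non-overlap check for acquisition areas modelled as
# axis-aligned rectangles (left, top, right, bottom) -- an assumption.

def rectangles_overlap(a, b):
    al, at, ar, ab = a
    bl, bt, br, bb = b
    # Rectangles overlap only if their spans intersect on both axes.
    return al < br and bl < ar and at < bb and bt < ab

def all_disjoint(areas):
    # True if no pair of acquisition areas overlaps.
    return all(
        not rectangles_overlap(areas[i], areas[j])
        for i in range(len(areas)) for j in range(i + 1, len(areas))
    )

print(all_disjoint([(0, 0, 10, 10), (10, 0, 20, 10)]))  # True (edge-adjacent)
print(all_disjoint([(0, 0, 10, 10), (5, 5, 15, 15)]))   # False (they overlap)
```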
In the embodiment of the application, each second image in the second image library, its corresponding image category, and the mapping relationship between them are identified to obtain corresponding identification information; in response to a first selection instruction in which the user selects images of a designated category, at least one designated image is selected from the second image library according to the identification information as the image designated by the user. The selected designated images can therefore be accurately indexed by the identification information, and at least one designated image can be quickly and intelligently selected from the second image library as the image designated by the user.

It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit its scope; the present application is not limited thereto, and all equivalent variations and modifications fall within the scope of the present application.

Claims (10)

1. An image selection method, comprising:
constructing a first image library by a plurality of first images acquired by a plurality of image acquisition devices;
preprocessing each first image in the first image library according to a preprocessing model to obtain a corresponding second image, and forming a second image library by a plurality of second images;
classifying each second image in the second image library according to a preset neural network model for classifying the images to obtain a corresponding image category;
identifying each second image, the corresponding image category and the mapping relation between each second image and the corresponding image category in the second image library to obtain corresponding identification information;
and responding to a first selection instruction of a user for selecting the images of the designated category, and selecting at least one designated image from the second image library according to the identification information to be used as the designated image selected by the user.
2. The method according to claim 1, wherein the pre-processing model comprises a first pre-processing model capable of highlighting at least one key information element of the set of key information elements, and wherein pre-processing each first image in the first image library according to the pre-processing model to obtain a corresponding second image comprises:
according to a first preprocessing model, each first image in the first image library is subjected to first preprocessing to obtain a corresponding second image with at least one key information element highlighted, wherein the first preprocessing is preprocessing for highlighting the at least one key information element.
3. The method of claim 2, wherein prior to said pre-processing each first image in the first image library according to a first pre-processing model, the method further comprises:
reading at least one key information element;
wherein the key information elements include at least one of:
a feature information element of the subject object in the first image, a facial expression information element of the subject object in the first image, and an accessory information element of the subject object in the first image.
4. The method of claim 1, wherein the pre-processing model comprises a second pre-processing model capable of removing at least one unrelated background object and/or background person, and wherein pre-processing each first image in the first image library according to the pre-processing model to obtain a corresponding second image further comprises:
and according to a second preprocessing model, carrying out second preprocessing on each first image in the first image library to obtain a corresponding second image with at least one background object and/or background person removed, wherein the second preprocessing is preprocessing for removing at least one unrelated background object and/or background person.
5. The method of claim 1, wherein after said selecting at least one designated image from said second image library based on said identification information, said method further comprises:
and responding to a second selection instruction of a user for selecting the appointed display equipment, and displaying at least one selected appointed image on the corresponding display equipment, wherein the second selection instruction carries the MAC address information of the appointed display equipment.
6. The method according to claim 5, wherein the displaying the selected at least one designated image on the corresponding display device comprises:
under the condition that the number of the designated images is two or more, respectively calculating a weight value of each designated image;
sorting the designated images according to their weight values;
and displaying each designated image at a corresponding position on the designated display device according to a correspondence between the image order and display positions of images on the designated display device.
7. The method of claim 1,
the plurality of first images are acquired by a plurality of image acquisition devices respectively arranged in a preset area, and no overlapping acquisition area exists among the plurality of image acquisition devices.
8. An image selection apparatus, comprising:
the first image library construction module is used for constructing a first image library by a plurality of first images acquired by a plurality of image acquisition devices;
the preprocessing module is used for preprocessing each first image in the first image library according to a preprocessing model to obtain a corresponding second image;
the second image library construction module is used for constructing a second image library by a plurality of second images obtained by preprocessing of the preprocessing module;
the image classification module is used for classifying each second image in the second image library according to a preset neural network model for classifying the images to obtain a corresponding image category;
the identification module is used for identifying each second image in the second image library, the corresponding image category and the mapping relation between each second image and the corresponding image category to obtain corresponding identification information;
and the image selection module is used for responding to a first selection instruction of a user for selecting images of a designated category, and selecting at least one designated image from the second image library according to the identification information as the designated image selected by the user.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN202010604116.1A 2020-06-29 2020-06-29 Image selection method and device, storage medium and terminal Pending CN111897986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010604116.1A CN111897986A (en) 2020-06-29 2020-06-29 Image selection method and device, storage medium and terminal


Publications (1)

Publication Number Publication Date
CN111897986A true CN111897986A (en) 2020-11-06

Family

ID=73206496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010604116.1A Pending CN111897986A (en) 2020-06-29 2020-06-29 Image selection method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN111897986A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108289224A (en) * 2017-12-12 2018-07-17 北京大学 Video frame prediction method and apparatus, and automatic compensation neural network
CN111008670A (en) * 2019-12-20 2020-04-14 云南大学 Fungus image identification method and device, electronic equipment and storage medium
CN111126180A (en) * 2019-12-06 2020-05-08 四川大学 Facial paralysis severity automatic detection system based on computer vision
CN111222557A (en) * 2019-12-31 2020-06-02 Oppo广东移动通信有限公司 Image classification method and device, storage medium and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Luo Jianhao et al.: "A Survey of Fine-Grained Image Classification Based on Deep Convolutional Features", Acta Automatica Sinica (《自动化学报》) *

Similar Documents

Publication Publication Date Title
US11151723B2 (en) Image segmentation method, apparatus, and fully convolutional network system
CN111444365B (en) Image classification method, device, electronic equipment and storage medium
CN112651438A (en) Multi-class image classification method and device, terminal equipment and storage medium
CN106777007A (en) Photograph album Classified optimization method, device and mobile terminal
CN107368550B (en) Information acquisition method, device, medium, electronic device, server and system
CN111985465B (en) Text recognition method, device, equipment and storage medium
CN111353956B (en) Image restoration method and device, computer equipment and storage medium
CN112686314B (en) Target detection method and device based on long-distance shooting scene and storage medium
CN105760458A (en) Picture processing method and electronic equipment
CN110929063A (en) Album generating method, terminal device and computer readable storage medium
CN110532448B (en) Document classification method, device, equipment and storage medium based on neural network
CN111967478B (en) Feature map reconstruction method, system, storage medium and terminal based on weight overturn
CN111274145A (en) Relationship structure chart generation method and device, computer equipment and storage medium
CN111897986A (en) Image selection method and device, storage medium and terminal
CN111160240A (en) Image object recognition processing method and device, intelligent device and storage medium
CN111325816B (en) Feature map processing method and device, storage medium and terminal
CN113408571B (en) Image classification method and device based on model distillation, storage medium and terminal
CN114022658B (en) Target detection method, target detection device, storage medium and terminal
CN111292247A (en) Image processing method and device
CN110969674B (en) Method and device for generating winding drawing, terminal equipment and readable storage medium
CN112070718A (en) Method and device for determining regional quantization parameter, storage medium and terminal
CN116823869A (en) Background replacement method and electronic equipment
CN110008907B (en) Age estimation method and device, electronic equipment and computer readable medium
CN112950167A (en) Design service matching method, device, equipment and storage medium
CN111918137A (en) Push method and device based on video characteristics, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201106