CN111510376B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111510376B
CN111510376B (application CN202010344388.2A)
Authority
CN
China
Prior art keywords
image
avatar
uploading
head portrait
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010344388.2A
Other languages
Chinese (zh)
Other versions
CN111510376A (en)
Inventor
张继丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010344388.2A
Publication of CN111510376A
Application granted
Publication of CN111510376B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L 51/10 Multimedia information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses an image processing method and apparatus and an electronic device, relating to image processing technology in the field of computer technology. The specific implementation scheme is as follows: a first uploaded image of a target user is acquired; when the size of the first uploaded image is larger than a preset size, the first uploaded image is input into an image area prediction model of the target user, and a target image area is obtained by prediction, the target image area being a partial image area of the first uploaded image; a first avatar is output based on the target image area. Thus, when the target user uploads the first uploaded image, the avatar the user needs can be predicted by the image area prediction model, reducing the user's operations during image upload and improving the efficiency with which the user uploads a personal avatar.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to image processing technologies in the field of computer technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
With the rapid development of internet technology, social applications (such as social platforms or instant messaging applications) have made communication increasingly convenient. In such applications, the personal avatar a user uploads serves as one of the main identifiers of that user's identity. At present, however, uploading a personal avatar can be cumbersome: after uploading an original image, the user must select an image area of it as the personal avatar through multiple operations, so the efficiency of uploading a personal avatar is low.
Disclosure of Invention
An image processing method, an image processing apparatus, and an electronic device are provided to solve the current problem of low efficiency when users upload personal avatars.
According to a first aspect, there is provided an image processing method applied to an electronic device, comprising:
acquiring a first uploaded image of a target user;
when the size of the first uploaded image is larger than a preset size, inputting the first uploaded image into an image area prediction model of the target user and predicting a target image area, wherein the target image area is a partial image area of the first uploaded image;
and outputting a first avatar based on the target image area.
According to a second aspect, there is also provided an image processing apparatus applied to an electronic device, including:
an uploaded image acquisition module, configured to acquire a first uploaded image of a target user;
a prediction module, configured to, when the size of the first uploaded image is larger than a preset size, input the first uploaded image into an image area prediction model of the target user and predict a target image area, wherein the target image area is a partial image area of the first uploaded image;
and an avatar output module, configured to output a first avatar based on the target image area.
According to a third aspect, there is also provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
A fourth aspect of the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first aspect described above.
In this application, a first uploaded image of a target user is acquired; when the size of the first uploaded image is larger than a preset size, the first uploaded image is input into an image area prediction model of the target user, and a target image area is obtained by prediction, the target image area being a partial image area of the first uploaded image; and a first avatar is output based on the target image area. Thus, when the target user uploads the first uploaded image, the avatar the user needs can be predicted by the image area prediction model, reducing the user's operations during image upload and improving the efficiency with which the user uploads a personal avatar.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present application;
FIG. 2 is a first schematic diagram according to a second embodiment of the present application;
FIG. 3 is a second schematic diagram according to the second embodiment of the present application;
FIG. 4 is a third schematic diagram according to the second embodiment of the present application;
FIG. 5 is a fourth schematic diagram according to the second embodiment of the present application;
FIG. 6 is a fifth schematic diagram according to the second embodiment of the present application;
FIG. 7 is a sixth schematic diagram according to the second embodiment of the present application;
fig. 8 is a block diagram of an electronic device for implementing the image processing method according to the embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Referring to fig. 1, an image processing method provided in an embodiment of the present application may be applied to an electronic device, as shown in fig. 1, where the image processing method includes the following steps:
Step 101: acquire a first uploaded image of a target user.
In this application, acquiring the first uploaded image of the target user may mean that the electronic device takes the uploaded image received at the current time as the first uploaded image, or retrieves an uploaded image that was received and cached before the current time.
It should be noted that, when the electronic device is a server, the received uploaded image may be an image that the target user uploads at a client (such as a mobile phone or a tablet computer) and that the client sends to the server; alternatively, when the electronic device is the client used by the target user, the image may be uploaded directly in response to an operation by the target user. This is not limited here.
For example, when the electronic device is a server: the client receives an image uploaded by the user on a displayed social platform avatar upload page and sends the uploaded image to the server, and the server acquires it.
Step 102: when the size of the first uploaded image is larger than a preset size, input the first uploaded image into an image area prediction model of the target user and predict a target image area.
The target image area is a partial image area of the first uploaded image.
In this application, after acquiring the first uploaded image of the target user, the electronic device may obtain its size, compare that size with the preset size, and, when the size of the first uploaded image is larger than the preset size, input the first uploaded image into the image area prediction model of the target user to determine the target image area within it.
When the size of the first uploaded image is larger than the preset size, the user generally needs to select a partial image area of the first uploaded image as the avatar; that is, the target image area is the partial image area of the first uploaded image, determined by the electronic device through the image area prediction model, that the user needs.
The image area prediction model may be any prediction model capable of predicting the target image area in the first uploaded image; in particular, it may be a deep learning model, which can improve prediction accuracy.
It should be noted that an image area prediction model may be preset in the electronic device for each user, and the image area prediction models of different users may have different model parameters, because different users have different habits when selecting an avatar area in their uploaded images. Of course, the models of different users may also share the same model parameters, that is, the same image area prediction model may be preset for different users. This is not limited here.
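The per-user prediction described above can be illustrated with a minimal sketch. The patent does not disclose the model's architecture or parameters, so the `user_model` dict below (a learned crop-centre fraction per user) and all function names are assumptions standing in for a trained deep learning model:

```python
# Hypothetical sketch of per-user region prediction. A stub "model"
# maps an uploaded image's size to a predicted crop box
# (left, top, right, bottom) at the preset avatar size.

def predict_target_region(image_size, user_model, output_size):
    """Predict the partial image area the user would pick as an avatar.

    image_size: (width, height) of the first uploaded image.
    user_model: assumed per-user habit parameters, here the fractional
        centre (cx, cy) of the user's past crop selections.
    output_size: (w, h) of the avatar to produce (the preset size).
    """
    w, h = image_size
    ow, oh = output_size
    # Centre of the predicted crop, from the user's learned habits.
    cx, cy = int(w * user_model["cx"]), int(h * user_model["cy"])
    # Clamp so the crop box stays inside the image.
    left = max(0, min(cx - ow // 2, w - ow))
    top = max(0, min(cy - oh // 2, h - oh))
    return (left, top, left + ow, top + oh)
```

A real system would replace the habit dict with a trained network, but the clamping and fixed output size carry over.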
In some embodiments, before step 102, the method may further include:
acquiring at least one training sample, wherein each training sample includes a second uploaded image uploaded by the target user and operation information of an avatar upload operation input by the target user, the avatar upload operation being used to select a partial image area of the second uploaded image to generate an avatar;
and training the image area prediction model based on the at least one training sample.
Here, the electronic device may train the image area prediction model on the at least one historically acquired training sample. Because each training sample includes a historical uploaded image of the target user (i.e., a second uploaded image) and the operation information of the avatar upload operation the target user input, the target image area predicted by the model is more consistent with the image area the target user would select by actual operation, improving the model's prediction accuracy.
In the above embodiment, the second uploaded image is an image uploaded during an avatar upload performed before the first uploaded image was uploaded, and the target user input an avatar upload operation for the second uploaded image during that upload.
The avatar upload operation may be any operation that selects the image area used to generate the avatar in the second uploaded image. For example, when an avatar selection frame floats over the avatar upload interface, the user may input a drag operation that moves the second uploaded image so that the part of it the user wants as the avatar falls inside the selection frame.
The operation information is generated while the user selects a partial image area of the second uploaded image as the avatar through the avatar upload operation, and may be any information that reflects the target user's habits in selecting an image area during avatar upload.
For example, when the avatar upload operation is the drag operation above, the operation information may include the drag trace of the operation, the position of the selected partial image area within the second uploaded image, or the distances between the edges of the selected area and the edges of the second uploaded image, and so on.
It should be noted that the at least one training sample generally comprises a large number of samples (thousands or even more than a hundred thousand). Training the image area prediction model may proceed by forming a training set from part of the samples and a validation set from another part, then iteratively training an initial model (such as an initial deep learning model) on the training set and the validation set, stopping when the trained model's precision satisfies a condition; the result is the image area prediction model.
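As a concrete illustration of the train/validate split described above, the toy fit below learns only the user's average crop centre from historical samples and scores a held-out validation set. The real model is a deep learning model trained iteratively until a precision condition is met; the sample layout, field names, and error measure here are all assumptions:

```python
# Illustrative only: "train" on part of the historical samples and
# score the rest. Each sample records the second uploaded image's size
# and the crop box the user actually selected via the avatar upload
# operation (the patent's "operation information").

def fit_region_model(samples, val_fraction=0.2):
    split = max(1, int(len(samples) * (1 - val_fraction)))
    train, val = samples[:split], samples[split:]

    def centre_fraction(s):
        (w, h), (l, t, r, b) = s["image_size"], s["crop"]
        return ((l + r) / 2 / w, (t + b) / 2 / h)

    cxs, cys = zip(*(centre_fraction(s) for s in train))
    model = {"cx": sum(cxs) / len(cxs), "cy": sum(cys) / len(cys)}

    # Validation "error": mean L1 distance between predicted and actual
    # crop centres, an assumed stand-in for the patent's unspecified
    # precision condition.
    errs = [abs(model["cx"] - centre_fraction(s)[0]) +
            abs(model["cy"] - centre_fraction(s)[1]) for s in val]
    val_error = sum(errs) / len(errs) if errs else 0.0
    return model, val_error
```

In the patent's scheme one would loop, retraining until `val_error` satisfies the stopping condition.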
In this application, the preset size may be set manually, or determined from the size of the uploaded avatar; for example, it may be the size of the avatar selection frame displayed floating over the avatar upload interface, and so on. In addition, the preset size may be used as the predicted size of the target image area.
In some embodiments, before step 102, the method may further include:
acquiring at least one second avatar, wherein each second avatar is an avatar generated by the electronic device based on an avatar upload operation input by the target user;
and determining the preset size based on the sizes of some or all of the at least one second avatar.
Here, the electronic device may determine the preset size based on the sizes of the avatars (i.e., the second avatars) generated during the target user's historical avatar uploads, so that the avatar it determines better satisfies the target user's actual needs.
Acquiring the at least one second avatar may mean taking the avatars of some or all of the at least one training sample as the at least one second avatar.
In addition, determining the preset size based on the size of the at least one second avatar may mean taking the size of any one of the at least one second avatar as the preset size, or taking the average of the sizes of some or all of the at least one second avatar as the preset size.
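The averaging variant above can be sketched directly; the function and argument names are assumptions:

```python
# Sketch of one preset-size rule described in the text: average the
# sizes of the user's historically generated avatars (the "second
# avatars") to obtain the preset size.

def preset_size_from_history(avatar_sizes):
    """avatar_sizes: list of (width, height) of historical avatars."""
    if not avatar_sizes:
        raise ValueError("no historical avatars to derive a preset size")
    n = len(avatar_sizes)
    avg_w = round(sum(w for w, _ in avatar_sizes) / n)
    avg_h = round(sum(h for _, h in avatar_sizes) / n)
    return (avg_w, avg_h)
```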
In addition, when the size of the first uploaded image is smaller than or equal to the preset size, the electronic device may leave the first uploaded image unprocessed.
Alternatively, in some embodiments, after acquiring the first uploaded image of the target user, the method may further include:
performing image fidelity processing on the first uploaded image to obtain a fourth avatar when the size of the first uploaded image is smaller than the preset size.
Here, when the size of the first uploaded image is smaller than the preset size, the electronic device may perform image fidelity processing on the first uploaded image, so that the resulting fourth avatar retains a high resolution even when enlarged to the size the user needs (such as the preset size), improving the display quality of the output avatar.
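The patent does not name a specific fidelity algorithm. As a minimal stand-in, the sketch below upscales a tiny grayscale "image" (a list of pixel rows) by integer pixel replication; a production system would use high-quality resampling or a super-resolution model instead:

```python
# Assumed illustration only: nearest-neighbour upscaling, where each
# pixel becomes a factor x factor block. Real fidelity processing would
# preserve detail far better than this.

def upscale(image, factor):
    """image: row-major list of pixel rows; factor: integer scale."""
    return [
        [pixel for pixel in row for _ in range(factor)]
        for row in image
        for _ in range(factor)
    ]
```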
Step 103: output a first avatar based on the target image area.
In this application, after predicting the target image area in the first uploaded image, the electronic device may output the first avatar based on that area.
Outputting the first avatar based on the target image area may mean generating the first avatar from the image of the target image area at the predicted size.
In addition, outputting the first avatar may mean sending it to the client used by the user and displaying it in the avatar upload interface shown by the client, so that the user can decide, from the displayed first avatar, whether to use it.
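Generating the first avatar from the predicted target image area amounts to cropping; a minimal sketch on a row-major pixel grid (the function name is an assumption):

```python
# Crop a row-major "image" (list of pixel rows) to the predicted
# target image area, given as box = (left, top, right, bottom).

def crop(image, box):
    left, top, right, bottom = box
    return [row[left:right] for row in image[top:bottom]]
```

The box here would come from the image area prediction model of step 102.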
In some embodiments, the method may further include:
acquiring at least one third avatar historically uploaded by the target user;
determining at least one recommended avatar based on the at least one third avatar;
outputting the at least one recommended avatar to recommend the at least one recommended avatar to the target user.
Here, the electronic device may determine at least one recommended avatar from the avatars the target user has historically uploaded, and output the at least one recommended avatar, thereby actively recommending avatars suited to the user and making it more convenient for the user to set an avatar.
The process may run as follows: when the client receives a preset operation input by the target user, the electronic device acquires the at least one third avatar and determines the at least one recommended avatar from it; when the client displays the avatar upload page in response to the preset operation, the at least one recommended avatar is output to that page for display. The target user is thus shown the recommended avatars and can select the avatar to upload from among them.
It should be noted that each recommended avatar may have the preset size, so that the user can upload it directly; or it may be larger than the preset size, in which case the electronic device may process the avatar selected by the target user through steps 101 to 103 above; or it may be smaller than the preset size, in which case the electronic device may perform image fidelity processing on the selected avatar. This is not limited here.
In addition, determining the at least one recommended avatar based on the at least one third avatar may proceed by obtaining the image type of each third avatar and determining a target image type, namely the image type shared by the largest number of third avatars among the obtained types, and then selecting at least one recommended avatar from the target image library corresponding to the target image type.
For example, suppose the target user has historically uploaded 10 avatars, of which 5 are of type 1, 3 are of type 2, and 2 are of type 3. If type 1 has a preset correspondence with image library 1, the electronic device may select at least one avatar from image library 1 as the at least one recommended avatar; if image library 1 contains 100 avatars, the electronic device may, for example, randomly select 10 of them as the at least one recommended avatar, and so on.
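The majority-type rule in this example can be sketched as follows; the library mapping, the default sample count of 10, and the seeded RNG (used only so the sketch is reproducible) are assumptions:

```python
import random
from collections import Counter

# Sketch of the recommendation rule above: find the image type that the
# most historical avatars share, then randomly sample candidates from
# the image library mapped to that type.

def recommend_avatars(history_types, libraries, k=10, rng=None):
    """history_types: image type of each historically uploaded avatar.
    libraries: dict mapping image type -> list of candidate avatars."""
    target_type = Counter(history_types).most_common(1)[0][0]
    pool = libraries.get(target_type, [])
    rng = rng or random.Random(0)
    return rng.sample(pool, min(k, len(pool)))
```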
In some embodiments, the determining at least one recommended avatar based on the at least one third avatar may include:
acquiring identity information of the target user;
generating label information of the target user based on the identity information of the target user and the at least one third avatar;
and determining, based on a preset association between tag groups and avatars, at least one recommended avatar associated with the target tag group to which the tag information belongs.
Here, the electronic device may determine the target tag group associated with the target user from the target user's identity information and the at least one third avatar, and then determine the at least one recommended avatar from that target tag group, improving the efficiency with which the electronic device recommends avatars to the user.
The identity information of the target user may include login information (such as a unique identifier) and other information, such as age.
In addition, determining the at least one recommended avatar associated with the target tag group may proceed as follows: the electronic device determines the avatar library associated with the target tag group (that is, the target tag group has a preset association with all avatars in that library); it obtains historical behaviour information of the target user, which may include the target user's preferences (such as favourite videos, collections, downloads, or social groups in the social application); and it searches the avatar library associated with the target tag group for avatars matching the historical behaviour information, taking the avatars found as the at least one recommended avatar.
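The tag-group lookup just described can be sketched with minimal data structures; every structure and name here (the tag-group sets, the per-avatar `topic` field used as the behaviour match) is an assumption, since the patent leaves the matching criterion open:

```python
# Hypothetical sketch: tag info derived from the user's identity and
# historical avatars selects a tag group; the group's associated avatar
# library is then filtered by the user's behavioural preferences.

def recommend_by_tags(tag_info, tag_groups, group_libraries, preferences):
    """tag_groups: dict group name -> set of tag-info values it covers.
    group_libraries: dict group name -> list of avatar records.
    preferences: topics from the user's historical behaviour info."""
    # Find the target tag group covering this user's tag info
    # (raises StopIteration if no group matches; a sketch-level choice).
    target_group = next(g for g, tags in tag_groups.items() if tag_info in tags)
    library = group_libraries[target_group]
    # Keep only avatars matching the user's behavioural preferences.
    return [a for a in library if a["topic"] in preferences]
```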
In this application, a first uploaded image of a target user is acquired; when the size of the first uploaded image is larger than a preset size, the first uploaded image is input into an image area prediction model of the target user, and a target image area is obtained by prediction, the target image area being a partial image area of the first uploaded image; and a first avatar is output based on the target image area. Thus, when the target user uploads the first uploaded image, the avatar the user needs can be predicted by the image area prediction model, reducing the user's operations during image upload and improving the efficiency with which the user uploads a personal avatar.
Referring to fig. 2, an embodiment of the present application provides an image processing apparatus applied to an electronic device, and as shown in fig. 2, the image processing apparatus 200 includes:
an uploaded image acquisition module 201, configured to acquire a first uploaded image of a target user;
a prediction module 202, configured to, when the size of the first uploaded image is larger than a preset size, input the first uploaded image into an image area prediction model of the target user and predict a target image area, where the target image area is a partial image area of the first uploaded image;
and an avatar output module 203, configured to output a first avatar based on the target image area.
Optionally, as shown in fig. 3, the apparatus 200 further includes:
a training sample acquisition module 204, configured to acquire at least one training sample, where each training sample includes a second uploaded image uploaded by the target user and operation information of an avatar upload operation input by the target user, the avatar upload operation being used to select a partial image area of the second uploaded image to generate an avatar;
a model training module 205, configured to train the image area prediction model based on the at least one training sample.
Optionally, each training sample further includes an avatar generated by the electronic device based on the avatar uploading operation;
as shown in fig. 4, the apparatus 200 further includes:
a first avatar obtaining module 206, configured to obtain at least one second avatar, where each second avatar is an avatar generated by the electronic device based on an avatar uploading operation input by the target user;
a preset size determining module 207, configured to determine the preset size based on the size of the at least one second avatar.
Optionally, as shown in fig. 5, the apparatus 200 further includes:
a second avatar acquisition module 208, configured to acquire at least one third avatar historically uploaded by the target user;
a recommended avatar determination module 209, configured to determine at least one recommended avatar based on the at least one third avatar;
a recommended avatar output module 210, configured to output the at least one recommended avatar, so as to recommend the at least one recommended avatar to the target user.
Optionally, as shown in fig. 6, the recommended avatar determination module 209 includes:
an identity information obtaining unit 2091, configured to obtain identity information of the target user;
a tag information generating unit 2092, configured to generate tag information of the target user based on the identity information of the target user and the at least one third avatar;
the recommended avatar determination unit 2093 is configured to determine, based on the association relationship between the preset tag group and the avatar, at least one recommended avatar having an association relationship with the target tag group to which the tag information belongs.
Optionally, as shown in fig. 7, the apparatus 200 further includes:
an image fidelity module 210, configured to perform image fidelity processing on the first uploaded image to obtain a fourth avatar when the size of the first uploaded image is smaller than the preset size.
Optionally, the image region prediction model is a deep learning model.
It should be noted that, the image processing apparatus 200 is capable of implementing each process implemented by the electronic device in the embodiment of the method in fig. 1 of the present application, and achieving the same beneficial effects, and for avoiding repetition, details are not described here again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 8, is a block diagram of an electronic device according to an image processing method of an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 8, the electronic apparatus includes: one or more processors 801, memory 802, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 8 illustrates an example with one processor 801.
The memory 802 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the image processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the image processing method provided by the present application.
The memory 802, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiment of the present application (for example, the uploaded image acquisition module 201, the prediction module 202, and the avatar output module 203 shown in fig. 2). The processor 801 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 802, that is, implements the image processing method in the above-described method embodiment.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created from use of the electronic device for image processing, and the like. Further, the memory 802 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 802 may optionally include memory located remotely from the processor 801, which may be connected over a network to the electronic device for image processing. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the image processing method may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, and are exemplified by a bus in fig. 8.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for image processing; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 804 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In the application, a first upload image of a target user is acquired; when the size of the first upload image is larger than a preset size, the first upload image is input to an image region prediction model of the target user to predict a target image region, the target image region being a partial image region of the first upload image; and a first avatar is output based on the target image region. The electronic device can thus automatically generate an avatar that matches the target user's habit of selecting image regions, improving the accuracy and efficiency of avatar generation.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. An image processing method applied to an electronic device, comprising:
acquiring a first uploading image of a target user;
under the condition that the size of the first uploading image is larger than a preset size, inputting the first uploading image into an image area prediction model of the target user, and predicting to obtain a target image area, wherein the target image area is a partial image area in the first uploading image;
outputting a first head portrait based on the target image area;
before the first uploaded image is input to the image region prediction model of the target user and a target image region is predicted, the method further includes:
acquiring at least one training sample, wherein the training sample comprises a second uploading image uploaded by the target user and operation information of an avatar uploading operation input by the target user, the avatar uploading operation is used for selecting a partial image area in the second uploading image to generate an avatar, and the operation information is used for representing an operation habit of the target user in selecting image areas during avatar uploading;
and training to obtain the image region prediction model based on the at least one training sample.
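The control flow of claim 1 can be sketched in Python. Everything below is illustrative: the names `PRESET_SIZE`, `predict_region`, and `process_upload`, and the centered-box stand-in for the trained image region prediction model, are assumptions for demonstration, not the patent's implementation.

```python
PRESET_SIZE = (200, 200)  # hypothetical preset avatar size (width, height)

def predict_region(image_size):
    """Stand-in for the trained image region prediction model: here it
    simply centers a preset-size box in the uploaded image. The real model
    would predict a region matching the user's cropping habits."""
    w, h = image_size
    pw, ph = PRESET_SIZE
    left = (w - pw) // 2
    top = (h - ph) // 2
    return (left, top, left + pw, top + ph)  # (x0, y0, x1, y1)

def process_upload(image_size):
    """Claim-1 control flow: only images larger than the preset size go
    through region prediction; the predicted region becomes the avatar."""
    w, h = image_size
    if w > PRESET_SIZE[0] and h > PRESET_SIZE[1]:
        return predict_region(image_size)
    return (0, 0, w, h)  # small images are handled by claim 5 instead

print(process_upload((800, 600)))  # → (300, 200, 500, 400)
```

The branch on the preset size mirrors the "under the condition that the size of the first uploading image is larger than a preset size" clause; smaller uploads fall through to the fidelity path of claim 5.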
2. The method according to claim 1, wherein before inputting the first uploading image into the image area prediction model of the target user and predicting the target image area in the case that the size of the first uploading image is larger than the preset size, the method further comprises:
acquiring at least one second head portrait, wherein each second head portrait is a head portrait generated by the electronic equipment based on head portrait uploading operation input by the target user;
determining the preset size based on the size of the at least one second avatar.
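Claim 2 leaves open how the preset size is derived from the user's previously generated avatars. A minimal sketch, assuming the per-dimension mean is used (the function name and the averaging rule are both hypothetical):

```python
def preset_size_from_history(avatar_sizes):
    """Hypothetical rule: take the per-dimension integer mean of the sizes
    of the user's previously generated avatars as the preset size. The
    patent only states that the preset size is determined from those sizes."""
    ws = [w for w, _ in avatar_sizes]
    hs = [h for _, h in avatar_sizes]
    return (sum(ws) // len(ws), sum(hs) // len(hs))

# three historical avatars → preset size is their average
print(preset_size_from_history([(180, 180), (220, 220), (200, 200)]))  # → (200, 200)
```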
3. The method of claim 1, further comprising:
obtaining at least one third head portrait historically uploaded by the target user;
determining at least one recommended avatar based on the at least one third avatar;
outputting the at least one recommended avatar to recommend the at least one recommended avatar to the target user.
4. The method of claim 3, wherein determining at least one recommended avatar based on the at least one third avatar comprises:
acquiring identity information of the target user;
generating label information of the target user based on the identity information of the target user and the at least one third avatar;
and determining at least one recommended head portrait with an association relation with a target tag group to which the tag information belongs based on the association relation between a preset tag group and the head portrait.
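The tag-group lookup of claim 4 can be sketched as a dictionary traversal. The tag names, group names, and avatar file names below are all invented for illustration; the patent specifies only the association structure between preset tag groups and avatars, not its contents:

```python
# Hypothetical preset association between tag groups and avatars.
TAG_GROUP_AVATARS = {
    "sports": ["avatar_run.png", "avatar_ball.png"],
    "pets":   ["avatar_cat.png", "avatar_dog.png"],
}

# Hypothetical mapping from a user's tag information to its tag group.
TAG_TO_GROUP = {"basketball": "sports", "jogging": "sports", "cat": "pets"}

def recommend_avatars(user_tags):
    """Resolve each user tag to its preset tag group and collect the
    avatars associated with those groups, de-duplicated in order."""
    recs = []
    for tag in user_tags:
        group = TAG_TO_GROUP.get(tag)
        if group:
            for avatar in TAG_GROUP_AVATARS[group]:
                if avatar not in recs:
                    recs.append(avatar)
    return recs

print(recommend_avatars(["cat", "jogging"]))
# → ['avatar_cat.png', 'avatar_dog.png', 'avatar_run.png', 'avatar_ball.png']
```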
5. The method of claim 1, wherein after obtaining the first uploaded image of the target user, further comprising:
and under the condition that the size of the first uploaded image is smaller than the preset size, performing image fidelity processing on the first uploaded image to obtain a fourth head portrait.
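The patent does not define the "image fidelity processing" of claim 5. As one hedged illustration, a toy nearest-neighbour upscaler over a nested-list image shows how a below-preset-size upload might be enlarged to avatar size; the function name and the choice of algorithm are assumptions, not the claimed method:

```python
def upscale_nearest(pixels, factor):
    """Toy stand-in for 'image fidelity processing': nearest-neighbour
    upscaling of a small image (a list of pixel rows) by an integer
    factor, so a below-preset-size upload reaches the avatar size."""
    out = []
    for row in pixels:
        # repeat each pixel horizontally, then the whole row vertically
        new_row = [p for p in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(new_row))
    return out

print(upscale_nearest([[1, 2], [3, 4]], 2))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```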
6. The method of claim 1, wherein the image region prediction model is a deep learning model.
7. An image processing apparatus applied to an electronic device, comprising:
the uploading image acquisition module is used for acquiring a first uploading image of a target user;
the prediction module is used for inputting the first uploading image into an image area prediction model of the target user under the condition that the size of the first uploading image is larger than a preset size, and predicting to obtain a target image area, wherein the target image area is a partial image area in the first uploading image;
the head portrait output module is used for outputting a first head portrait based on the target image area;
the device, still include:
the training sample acquisition module is used for acquiring at least one training sample, wherein the training sample comprises a second uploading image uploaded by the target user and operation information of an avatar uploading operation input by the target user, the avatar uploading operation is used for selecting a partial image area in the second uploading image to generate an avatar, and the operation information is used for representing an operation habit of the target user in selecting image areas during avatar uploading;
and the model training module is used for training the image region prediction model based on the at least one training sample.
8. The apparatus of claim 7, further comprising:
the first head portrait acquisition module is used for acquiring at least one second head portrait, wherein each second head portrait is a head portrait generated by the electronic equipment based on head portrait uploading operation input by the target user;
a preset size determination module for determining the preset size based on the size of the at least one second avatar.
9. The apparatus of claim 7, further comprising:
the second head portrait acquisition module is used for acquiring at least one third head portrait historically uploaded by the target user;
a recommended avatar determination module to determine at least one recommended avatar based on the at least one third avatar;
and the recommended head portrait output module is used for outputting the at least one recommended head portrait so as to recommend the at least one recommended head portrait to the target user.
10. The apparatus of claim 9, wherein the recommended avatar determination module comprises:
the identity information acquisition unit is used for acquiring the identity information of the target user;
a tag information generating unit, configured to generate tag information of the target user based on the identity information of the target user and the at least one third avatar;
and the recommended head portrait determining unit is used for determining at least one recommended head portrait with an association relation with a target tag group to which the tag information belongs based on the association relation between a preset tag group and the head portrait.
11. The apparatus of claim 7, further comprising:
and the image fidelity module is used for performing image fidelity processing on the first uploaded image to obtain a fourth head portrait under the condition that the size of the first uploaded image is smaller than the preset size.
12. The apparatus of claim 7, wherein the image region prediction model is a deep learning model.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202010344388.2A 2020-04-27 2020-04-27 Image processing method and device and electronic equipment Active CN111510376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010344388.2A CN111510376B (en) 2020-04-27 2020-04-27 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111510376A CN111510376A (en) 2020-08-07
CN111510376B true CN111510376B (en) 2022-09-20

Family

ID=71874914


Country Status (1)

Country Link
CN (1) CN111510376B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011298B (en) * 2021-03-09 2023-12-22 阿波罗智联(北京)科技有限公司 Truncated object sample generation, target detection method, road side equipment and cloud control platform

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109977924A (en) * 2019-04-15 2019-07-05 北京麦飞科技有限公司 For real time image processing and system on the unmanned plane machine of crops
JP2019205103A (en) * 2018-05-25 2019-11-28 キヤノン株式会社 Information processing apparatus, information processing method, and program

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN106295567B (en) * 2016-08-10 2019-04-12 腾讯科技(深圳)有限公司 A kind of localization method and terminal of key point
CN106952239A (en) * 2017-03-28 2017-07-14 厦门幻世网络科技有限公司 image generating method and device
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN108629319B (en) * 2018-05-09 2020-01-07 北京嘀嘀无限科技发展有限公司 Image detection method and system
CN108984657B (en) * 2018-06-28 2020-12-01 Oppo广东移动通信有限公司 Image recommendation method and device, terminal and readable storage medium
CN110827261B (en) * 2019-11-05 2022-12-06 泰康保险集团股份有限公司 Image quality detection method and device, storage medium and electronic equipment


Non-Patent Citations (1)

Title
Image Semantic Segmentation Based on Contextual Scene Structure; Chen Qiaosong et al.; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); 2020-04-15 (No. 02); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant