CN110622218A - Image display method, device, storage medium and terminal - Google Patents
- Publication number: CN110622218A
- Application number: CN201780090737.9
- Authority
- CN
- China
- Prior art keywords
- human body
- dimensional image
- image model
- module
- target
- Prior art date
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention discloses an image display method, an image display device, a storage medium and a terminal. The method comprises the following steps: collecting at least two human body images through the dual cameras of a terminal; obtaining depth information of the human body from the at least two human body images; establishing a three-dimensional image model of the human body from the depth information; synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.
Description
The invention relates to the field of mobile communication, in particular to an image display method, an image display device, a storage medium and a terminal.
Virtual fitting uses computer technology to let a virtual model try on clothes sold on the internet in place of the real user; the effect presented by the virtual model serves as a reference for choosing clothes online, making it convenient for users to purchase clothes that fit. The current virtual fitting scheme mainly relies on a virtual fitting model selected from a gallery: the user picks a model and the clothes, and judges the clothes by how they look on that model. Because the user's body shape differs from the model's, this fitting mode lacks realistic detail, and effects such as fabric and folds are not considered, so it cannot meet the customer's actual fitting requirements: the customer cannot obtain a real fitting effect.
Disclosure of Invention
The embodiment of the invention provides an image display method, an image display device, a storage medium and a terminal, which can improve the fitting effect for the user.
In a first aspect, an embodiment of the present invention provides an image display method, including:
acquiring at least two human body images through two cameras of a terminal;
acquiring depth information of the human body according to the at least two human body images;
establishing a three-dimensional image model of the human body according to the depth information of the human body;
synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and displaying the target three-dimensional image model.
In a second aspect, an embodiment of the present invention further provides an image display apparatus, including: the human body image acquisition module, the human body depth acquisition module, the human body model building module, the synthesis module and the display module;
the human body image acquisition module is used for acquiring at least two human body images through the double cameras of the terminal;
the human body depth acquisition module is used for acquiring the depth information of the human body according to the at least two human body images;
the human body model establishing module is used for establishing a three-dimensional image model of the human body according to the depth information of the human body;
the synthesis module is used for synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and the display module is used for displaying the target three-dimensional image model.
In a third aspect, the present invention also provides a storage medium, wherein the storage medium stores instructions that are loaded by a processor to perform the steps of:
acquiring at least two human body images through two cameras of a terminal;
acquiring depth information of the human body according to the at least two human body images;
establishing a three-dimensional image model of the human body according to the depth information of the human body;
synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and displaying the target three-dimensional image model.
In a fourth aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and a processor, where the memory stores instructions, and the processor loads the instructions to perform the following steps:
acquiring at least two human body images through two cameras of a terminal;
acquiring depth information of the human body according to the at least two human body images;
establishing a three-dimensional image model of the human body according to the depth information of the human body;
synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and displaying the target three-dimensional image model.
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of a scene framework of an image display method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of an image display method according to an embodiment of the present invention.
Fig. 3 is a schematic view of a dual-camera viewing range according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an image conversion of the image display method according to the embodiment of the present invention.
Fig. 5 is another schematic flow chart of an image display method according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present invention.
Fig. 7 is another schematic structural diagram of an image display device according to an embodiment of the present invention.
Fig. 8 is still another schematic structural diagram of an image display device according to an embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Fig. 10 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present invention are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. These steps and operations are therefore referred to, at times, as being computer-executed: they involve the manipulation, by the computer's processing unit, of electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which reconfigures or otherwise alters the computer's operation in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of memory with particular characteristics defined by the data format. Although the principles of the invention are described in the foregoing context, this is not meant to be limiting; those skilled in the art will appreciate that various steps and operations described hereinafter may also be implemented in hardware.
The principles of the present invention are operational with numerous other general-purpose or special-purpose computing or communication environments and configurations. Examples of well-known computing systems, environments, and configurations that may be suitable for use with the invention include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microprocessor-based systems, mainframe computers, and distributed computing environments that include any of the above systems or devices.
The details will be described below separately.
The embodiment will be described from the perspective of an image display apparatus, which may be integrated in a terminal; the terminal may be an electronic device with dual cameras, such as a mobile Internet device (e.g., a smart phone or tablet computer).
Referring to fig. 1, fig. 1 is a schematic view of a scene framework of an image display method according to an embodiment of the present invention. The system comprises a terminal and a server, wherein the terminal and the server are in communication connection through the Internet.
When a user invokes the on-screen image display function of the terminal, the terminal can record the input and output data of the processing and then send the recorded data to the server, either through the WEB or through a client program installed on the terminal. The server can collect data sent by multiple terminals and process the received data with deep learning in order to carry out the related functions.
Any of the following transmission protocols may be employed between the terminal and the server, without limitation: HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), P2P (Peer to Peer), P2SP (Peer to Server & Peer), and the like.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image display method according to an embodiment of the present invention, where the image display method according to the embodiment includes:
and S101, acquiring at least two human body images through two cameras of the terminal.
In the embodiment of the invention, two different images of the same scene can be acquired through the two cameras on the terminal. For example, the terminal receives a photographing instruction input by a user and opens the dual cameras according to the photographing instruction; the dual cameras then each capture one path of image data, yielding two images of the scene.
As shown in fig. 3, because the two camera modules of the dual camera are mounted at different positions, their viewing ranges differ, so the images captured by the two modules differ as well. For example, one of the two camera modules may be a 16-megapixel wide-angle lens with viewing range a, and the other a 20-megapixel telephoto lens with viewing range b; other combinations are not enumerated here.
In one embodiment, original images are obtained by shooting through the two cameras. An original image may contain many things besides the person, such as plants and buildings, so a human body image is further determined from the two original images acquired by the two cameras. This can be done in various ways; for example, a face in the image is identified by face recognition technology, and the human body image is determined from it. Face recognition is a biometric technique that identifies a person based on facial feature information: a camera or video camera captures an image or video stream containing a face, the face is automatically detected and tracked in the image, and recognition is then performed on the detected face. The detection itself may use an AdaBoost (adaptive boosting) algorithm based on Haar features, or another algorithm may be used to detect the face in the original image; this embodiment does not limit the choice.
In an embodiment, if there are a plurality of human body images in the images captured by the two cameras, feature information of the plurality of human body images may be extracted respectively, and a target human body image in the plurality of human body images may be determined according to the feature information.
And step S102, acquiring depth information of the human body according to at least two human body images.
In the embodiment of the invention, the mobile phone is provided with dual cameras: one is the main camera and the other the auxiliary camera; the preview image shot by the main camera is the main preview image, and the preview image shot by the auxiliary camera is the auxiliary preview image. Because there is a certain distance and angle between the main and auxiliary cameras, there is a certain phase difference between the main and auxiliary preview images, and this phase difference can be used to obtain the depth of field of each pixel block, or even each pixel. Fig. 4 shows the original image and the depth image of the current scene. After the two cameras simultaneously acquire two images of the same scene, corresponding pixels in the two images are found by a stereo matching algorithm, the parallax information is calculated by the triangulation principle, and the parallax information is converted to represent the depth of objects in the scene. Based on the stereo matching algorithm, a depth image of the scene can thus be obtained by shooting a group of images of the same scene from different angles.
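The stereo matching step above can be sketched with a brute-force block matcher; this is an illustrative toy implementation (sum-of-absolute-differences over small patches), not the matcher the patent uses.

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=3):
    """Brute-force SAD block matching: for each left-image pixel, find the
    horizontal shift of the right image with the lowest patch cost."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1].astype(np.int32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right view is the left view shifted 4 px, so the
# true disparity is 4 wherever the pattern is visible.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
d = disparity_map(left, right)
```

Real pipelines use rectified images and sub-pixel refinement, but the principle of searching along the horizontal epipolar line is the same.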
In an embodiment, the method for obtaining the depth information of the human body may be various, for example, the depth information of a plurality of organ points of the human body may be obtained, and then the depth information of the human body may be generated according to the depth information of the plurality of organ points. Wherein, the organ points refer to contour points on the human organ, and each human organ can comprise one or more contour points. The human body image may include a plurality of organ points, such as a nose contour point, a face contour point, a mouth contour point, an eye contour point, an arm contour point, a leg contour point, and the like, which is not limited in this embodiment.
After the at least two human body images are acquired, the organ points in them can be located, yielding at least one organ point. For locating organ points, the terminal may use algorithms such as ASM (active shape model), AAM (active appearance model), or SDM (supervised descent method). After obtaining the organ points in the human body image, the terminal can obtain, through the dual cameras, the depth information of each organ point it has located.
The depth information of an organ point indicates the distance from the organ point to the terminal, and is negatively correlated with that distance: the larger the image depth value of an organ point, the closer the organ point is to the terminal; the smaller the depth value, the farther away it is.
In an embodiment, because the organ point is seen at different angles by the two cameras, the terminal can determine the distance from the organ point to itself by triangulation, and determine the depth information of the organ point from that distance. After the depth information of a plurality of organ points is acquired, the depth information of the human body is generated from it.
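The triangulation relation for two parallel cameras reduces to Z = f·b/d (focal length times baseline over disparity). The numbers below are illustrative, not from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * b / d.
    A larger disparity means the point is closer to the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 1000 px focal length and a 2 cm baseline with a
# 10 px disparity place the organ point 2 m from the terminal.
z_far = depth_from_disparity(1000.0, 0.02, 10.0)
z_near = depth_from_disparity(1000.0, 0.02, 20.0)
```

Doubling the disparity halves the computed distance, which matches the negative correlation between depth value and distance described above.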
And step S103, establishing a three-dimensional image model of the human body according to the depth information of the human body.
In this embodiment, a three-dimensional image model of the human body may be established using three-dimensional reconstruction (3D reconstruction). Three-dimensional reconstruction establishes, for a three-dimensional object, a mathematical model suitable for computer representation and processing; it is the basis for processing, operating on, and analyzing the properties of three-dimensional objects in a computer environment, and a key technology for building virtual reality that expresses the objective world in a computer. It rebuilds surfaces from three-dimensional coordinates, and therefore requires studying the points, lines, and planes of the object being built and their positions relative to one another. In the three-dimensional reconstruction of the human body, point cloud data of the body is obtained by measuring the relative depth between the body and the terminal; the density of the point cloud directly determines the precision of the modeled object. The surface shape of the body is then finally obtained by searching and compensating the point cloud data. In addition, if the point cloud data contains color information at the data points, color textures of the body surface can be generated from the neighborhood points of the cloud.
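The first step of the reconstruction, turning the depth image into point cloud data, can be sketched as a pinhole back-projection; the intrinsic parameters here are placeholders, not values from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into camera-space 3D points using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat depth map 2 m from the camera yields a planar cloud at Z = 2.
pts = depth_to_point_cloud(np.full((4, 4), 2.0), fx=500, fy=500, cx=2, cy=2)
```

Surface meshing and texture generation would then be run over this cloud, as the paragraph above describes.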
And step S104, synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain the target three-dimensional image model.
In the embodiment of the invention, key points in the three-dimensional image model of the target virtual clothes can be extracted, and the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body are then synthesized according to these key points. For example, the three-dimensional image model of the target virtual clothes may first be divided into at least five model parts: the trunk, the left and right arms, and the left and right legs. At least six key points are then obtained from the divided parts: the crotch point, the left and right shoulder points, the left and right underarm points, and the neck reference point. Finally, the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body are synthesized according to these six key points to obtain the target three-dimensional image model.
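The patent does not specify how the key points drive the synthesis; one common choice, assumed here purely for illustration, is least-squares rigid alignment of corresponding key points (the Kabsch algorithm), which places the garment model onto the body model.

```python
import numpy as np

def align_keypoints(garment_pts, body_pts):
    """Kabsch: find the rotation R and translation t that best map the
    garment key points onto the corresponding body key points."""
    gc, bc = garment_pts.mean(0), body_pts.mean(0)
    H = (garment_pts - gc).T @ (body_pts - bc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = bc - R @ gc
    return R, t

# Six illustrative key points (crotch, shoulders, underarms, neck) in
# arbitrary coordinates; the body points are the garment points shifted.
garment = np.array([[0.0, 0.0, 0.0], [-1, 2, 0], [1, 2, 0],
                    [-1, 1.5, 0.1], [1, 1.5, 0.1], [0, 2.2, 0.3]])
body = garment + np.array([0.5, 0.1, 0.0])
R, t = align_keypoints(garment, body)
```

After this rigid placement, a real fitting system would additionally deform the garment mesh (fabric and fold simulation), which the rigid step alone does not capture.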
And step S105, displaying the target three-dimensional image model.
In an embodiment, before the target three-dimensional image model is displayed, its image may be preprocessed; the preprocessing may include image enhancement, smoothing, noise reduction, and so on. For example, once the image of the target three-dimensional image model is obtained, information in it can be selectively enhanced or suppressed to improve the visual effect, or the image can be converted into a form better suited to machine processing for data extraction or recognition. An image enhancement system may, for instance, highlight the contours of an image with a high-pass filter so that a machine can measure the shape and perimeter of the contours. There are many methods of image enhancement, such as contrast stretching, logarithmic transformation, density slicing, and histogram equalization, all of which can change image gray levels and highlight details.
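Of the enhancement methods just listed, histogram equalization is the simplest to sketch: remap gray levels so the cumulative distribution becomes roughly uniform, stretching contrast.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image: build the
    cumulative distribution and use it as a gray-level lookup table."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A low-contrast ramp confined to gray levels [100, 131] is stretched
# to span the full [0, 255] range.
low = np.tile(np.arange(100, 132, dtype=np.uint8), (8, 1))
out = equalize_histogram(low)
```

(A constant image would make the denominator zero; production code guards that case.)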
As can be seen from the above, the image display method provided by the embodiment of the invention acquires at least two human body images through the dual cameras of the terminal, obtains the depth information of the human body from those images, establishes a three-dimensional image model of the human body from the depth information, synthesizes the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model, and displays it. A dual-camera terminal can thus build a three-dimensional image model of the user's own body and combine it with a three-dimensional image model of the clothes, so that the virtual fitting achieves a real fitting effect.
The image display method of the present invention will be further explained below based on the description of the above embodiment.
Referring to fig. 5, fig. 5 is another schematic flow chart of an image display method according to an embodiment of the present invention, including:
step S201, at least two human body images are collected through two cameras of the terminal.
Because the two camera modules of the dual cameras are mounted at different positions, the sizes of the at least two images they acquire may differ; this embodiment can process the images so that their sizes match. In an embodiment, after at least two human body images of the user are acquired through the dual cameras of the terminal, the method further comprises:
selecting the smallest of the at least two human body images as a reference image;
and compressing the other human body images to the size of the reference image, so that the at least two human body images have the same size.
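The two steps above can be sketched as follows; nearest-neighbour resampling is assumed here for brevity, though the patent does not name a compression method.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resampling: enough to bring two camera frames
    to a common size before stereo matching."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows][:, cols]

def match_to_smallest(images):
    """Pick the smallest frame as the reference and shrink the rest to it."""
    ref = min(images, key=lambda im: im.shape[0] * im.shape[1])
    h, w = ref.shape[:2]
    return [resize_nearest(im, h, w) for im in images]

big = np.arange(64, dtype=np.uint8).reshape(8, 8)
small = np.zeros((4, 4), dtype=np.uint8)
frames = match_to_smallest([big, small])
```

With equal sizes, per-pixel disparity between the two frames becomes well defined for the steps that follow.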
Step S202, the distance between two camera modules of the double cameras is obtained.
And step S203, generating parallax information between at least two human body images according to the distance.
In step S204, depth information of the human body is calculated according to the parallax information.
The mobile phone is provided with a dual-camera device: one camera is the main camera and the other the auxiliary camera; the preview image shot by the main camera is the main preview image, and the preview image shot by the auxiliary camera is the auxiliary preview image. Because there is a certain distance and angle between the main and auxiliary camera modules, there is a certain phase difference between the main and auxiliary preview images, namely the parallax information, and this phase difference can be used to obtain the depth of field of each pixel block, or even each pixel.
And step S205, establishing a three-dimensional image model of the human body according to the depth information of the human body.
In one embodiment, a three-dimensional Reconstruction technique (3D Reconstruction) may be used to create a three-dimensional image model of the human body. The process of three-dimensional reconstruction of the human body can obtain the point cloud data of the human body by measuring the relative depth between the human body and the terminal, and the density of the point cloud directly determines the precision of the modeling object. And then, the surface shape of the human body can be finally obtained by searching and compensating the point cloud data.
Step S206, acquiring position information of a first feature point in a three-dimensional image model of the target virtual clothes.
In an embodiment, before this step, a three-dimensional image model of the target virtual clothing needs to be obtained, specifically, an image of the target virtual clothing is obtained, then depth information of the clothing is obtained according to the image, and a three-dimensional image model of the target virtual clothing is established according to the clothing depth information. The image of the target virtual clothes can be obtained by shooting through the double cameras of the terminal, and can also be downloaded from a network database. That is, before the three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body are synthesized, the method further comprises the following steps:
acquiring at least two virtual clothes images of a target virtual clothes;
and acquiring the depth information of the clothes according to the at least two virtual clothes images, and establishing a three-dimensional image model of the target virtual clothes according to the depth information of the clothes.
And step S207, determining the corresponding second feature point position information in the three-dimensional image model of the human body according to the first feature point position information.
And step S208, synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information, to obtain the target three-dimensional image model.
In step S209, the target three-dimensional image model is displayed.
In an embodiment, after the target three-dimensional image model is displayed, the user may further adjust related parameters of the apparel, such as its color and size, and the three-dimensional image model is adjusted accordingly. That is, after displaying the target three-dimensional image model, the method may further comprise:
receiving a clothing parameter adjusting instruction input by a user;
adjusting the target three-dimensional image model according to the clothing parameter adjusting instruction;
and displaying the adjusted target three-dimensional image model.
In an embodiment, after the user views the target three-dimensional image model, the user may further query related information about the apparel, such as its price, manufacturer, and date of manufacture. After displaying the target three-dimensional image model, the method may further comprise:
receiving a query instruction of a user for the target virtual clothing, wherein the query instruction carries attribute information of the target virtual clothing;
and querying the price of the target virtual clothing according to the attribute information.
From the above, the image display method provided by the embodiment of the invention acquires at least two human body images through the dual cameras of the terminal; obtains the distance between the two camera modules of the dual cameras; generates the parallax information between the at least two human body images according to the distance; calculates the depth information of the human body according to the parallax information; establishes a three-dimensional image model of the human body according to the depth information; acquires the position information of a first feature point in the three-dimensional image model of the target virtual clothes; determines the corresponding second feature point position information in the three-dimensional image model of the human body according to the first feature point position information; synthesizes the two models according to the first and second feature point position information to obtain a target three-dimensional image model; and displays the target three-dimensional image model. A dual-camera terminal can thus build a three-dimensional image model of the user's own body and combine it with a three-dimensional image model of the clothes, so that the virtual fitting achieves a real fitting effect.
In order to better implement the image display method provided by the embodiment of the present invention, the embodiment of the present invention further provides an apparatus based on the image display method. The meanings of the terms are the same as those in the image display method described above, and for implementation details, reference may be made to the description in the method embodiments.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present invention, where the image display device 30 includes: a human body image acquisition module 301, a human body depth acquisition module 302, a human body model building module 303, a synthesis module 304 and a display module 305;
the human body image acquisition module 301 is used for acquiring at least two human body images through the dual cameras of the terminal;
a human body depth obtaining module 302, configured to obtain depth information of a human body according to at least two human body images;
a human body model building module 303, configured to build a three-dimensional image model of a human body according to depth information of the human body;
a synthesizing module 304, configured to synthesize the three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and a display module 305 for displaying the target three-dimensional image model.
In an embodiment, as shown in fig. 7, in the image display device 30, the human body depth acquisition module 302 includes: a distance acquisition submodule 3021, a parallax generation submodule 3022, and a depth calculation submodule 3023;
the distance acquisition submodule 3021 is configured to acquire a distance between two camera modules of the dual cameras;
a parallax generation submodule 3022 configured to generate parallax information between at least two human body images according to the distance;
and the depth calculation submodule 3023 is configured to calculate the depth information of the human body according to the parallax information.
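The relation these submodules rely on can be sketched directly; the snippet below is illustrative only and assumes rectified dual cameras. With baseline B (the distance between the two camera modules), focal length f in pixels, and disparity d in pixels, the depth is Z = f · B / d, in the units of B.

```python
# Disparity-to-depth conversion for a rectified stereo pair (illustrative
# sketch, not the patented implementation).

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Convert a disparity (pixels) to depth (same units as the baseline).

    Returns None for zero or negative disparity (no match / point at infinity).
    """
    if disparity_px <= 0:
        return None
    return focal_length_px * baseline_mm / disparity_px

# Example: f = 1000 px, camera modules 12 mm apart, disparity = 24 px
print(depth_from_disparity(24, 1000, 12.0))  # -> 500.0 (mm)
```

Note the inverse relation: halving the disparity doubles the computed depth, which is why a larger spacing between the two camera modules improves depth resolution for distant subjects.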
In one embodiment, as shown in fig. 8, the image display device 30 may further include: a clothing image acquisition module 306, a clothing depth acquisition module 307 and a clothing model establishing module 308;
a clothing image acquisition module 306, configured to obtain at least two virtual clothing images of the target virtual clothing before the synthesis module 304 synthesizes the three-dimensional image model of the target virtual clothing with the three-dimensional image model of the human body;
a clothing depth obtaining module 307, configured to obtain depth information of the clothing according to the at least two virtual clothing images;
and a clothing model establishing module 308 for establishing a three-dimensional image model of the target virtual clothing according to the clothing depth information.
In one embodiment, the synthesis module 304 includes: a first information acquisition submodule, a second information determination submodule and a synthesis submodule;
the first information acquisition submodule is used for acquiring the position information of a first feature point in a three-dimensional image model of the target virtual clothes;
the second information determining submodule is used for determining corresponding second characteristic point position information in the three-dimensional image model of the human body according to the first characteristic point position information;
and the synthesis submodule is used for synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body according to the position information of the first characteristic point and the position information of the second characteristic point.
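The feature-point-based synthesis performed by these submodules can be illustrated with a simple least-squares fit. The helpers below are a hypothetical sketch: given first feature points (on the clothing model) and the corresponding second feature points (on the body model), a uniform scale and translation is estimated and applied to every clothing vertex. A real pipeline would also estimate rotation and non-rigidly deform the garment; this only illustrates the correspondence idea.

```python
# Align clothing feature points to body feature points with a least-squares
# uniform scale + translation (illustrative sketch, hypothetical helpers).

def fit_scale_translation(src_pts, dst_pts):
    """Fit scale s and translation t minimising ||s * src + t - dst||^2."""
    n = len(src_pts)
    c_src = [sum(p[i] for p in src_pts) / n for i in range(3)]  # centroid
    c_dst = [sum(p[i] for p in dst_pts) / n for i in range(3)]
    num = sum(sum((s[i] - c_src[i]) * (d[i] - c_dst[i]) for i in range(3))
              for s, d in zip(src_pts, dst_pts))
    den = sum(sum((s[i] - c_src[i]) ** 2 for i in range(3)) for s in src_pts)
    scale = num / den if den else 1.0
    translation = [c_dst[i] - scale * c_src[i] for i in range(3)]
    return scale, translation

def transform_point(pt, scale, translation):
    """Apply the fitted transform to any vertex of the clothing model."""
    return [scale * pt[i] + translation[i] for i in range(3)]
```

Once the transform is fitted on the few labelled feature points, applying it to all clothing vertices places the garment model onto the body model in a single pass.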
In one embodiment, the image display device 30 further includes: a selection module and a processing module;
the selecting module is used for selecting the image with the smallest size among the at least two human body images as a reference image after the human body image acquisition module 301 acquires the at least two human body images through the dual cameras of the terminal;
and the processing module is used for compressing the other human body images according to the size of the reference image so that the at least two human body images have the same size.
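The size normalisation these two modules perform can be sketched as follows; this is an illustrative example assuming images are plain 2-D lists of pixel values (the function names are hypothetical). The smallest image becomes the reference, and the others are downscaled to match it with nearest-neighbour sampling, so that pixel-wise matching between the views operates on images of the same size.

```python
# Normalise captured images to the size of the smallest one (illustrative).

def pick_reference(images):
    """Index of the image with the fewest pixels."""
    return min(range(len(images)),
               key=lambda i: len(images[i]) * len(images[i][0]))

def resize_nearest(img, height, width):
    """Nearest-neighbour resample of a 2-D list to (height, width)."""
    src_h, src_w = len(img), len(img[0])
    return [[img[r * src_h // height][c * src_w // width]
             for c in range(width)]
            for r in range(height)]

def normalise_sizes(images):
    ref = images[pick_reference(images)]
    h, w = len(ref), len(ref[0])
    return [resize_nearest(img, h, w) for img in images]
```

A production implementation would use a proper image library with area-averaging filters, but the control flow (select smallest, compress the rest) matches the modules described above.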
In one embodiment, the image display device 30 further includes: the device comprises an adjustment instruction receiving module and an adjustment module;
an adjustment instruction receiving module, configured to receive a clothing parameter adjustment instruction input by a user after the display module 305 displays the target three-dimensional image model;
the adjusting module is used for adjusting the target three-dimensional image model according to the clothing parameter adjustment instruction;
the display module 305 is further configured to display the adjusted target three-dimensional image model.
In one embodiment, the image display device 30 further includes: the query instruction receiving module and the query module;
a query instruction receiving module, configured to receive a query instruction of a user for a target virtual garment after the display module 305 displays the target three-dimensional image model, where the query instruction carries attribute information of the target virtual garment;
and the query module is used for querying the price of the target virtual clothing according to the attribute information.
As can be seen from the above, the image display device provided in the embodiment of the present invention may acquire at least two human body images through the dual cameras of the terminal, acquire depth information of the human body according to the at least two human body images, establish a three-dimensional image model of the human body according to the depth information, synthesize the three-dimensional image model of the target virtual clothing and the three-dimensional image model of the human body to obtain a target three-dimensional image model, and display the target three-dimensional image model. Based on the dual cameras, the terminal can establish a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of the clothing, thereby performing virtual fitting and achieving a realistic fitting effect.
The present invention also provides a storage medium having stored thereon instructions that are loaded by a processor to perform the steps of:
acquiring at least two human body images through two cameras of a terminal;
acquiring depth information of the human body according to the at least two human body images;
establishing a three-dimensional image model of the human body according to the depth information of the human body;
synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and displaying the target three-dimensional image model.
An embodiment of the present invention further provides a terminal. The terminal may be a smart phone, a tablet computer, or the like. As shown in fig. 9, the terminal 400 includes a processor 401 and a memory 402, and the processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the terminal 400, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or loading an application stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the terminal.
In this embodiment, the processor 401 in the terminal 400 loads instructions corresponding to one or more processes of an application program into the memory 402 according to the following steps, and the processor 401 runs the application program stored in the memory 402, so as to implement the following functions:
acquiring at least two human body images through two cameras of a terminal;
acquiring depth information of the human body according to the at least two human body images;
establishing a three-dimensional image model of the human body according to the depth information of the human body;
synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
and displaying the target three-dimensional image model.
In an embodiment, when obtaining the depth information of the human body according to the at least two human body images, the processor 401 is configured to perform the following steps:
acquiring the distance between two camera modules of the double cameras;
generating parallax information between the at least two human body images according to the distance;
and calculating the depth information of the human body according to the parallax information.
In an embodiment, before the three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body are synthesized, the processor 401 is further configured to perform the following steps:
acquiring at least two virtual clothes images of the target virtual clothes;
and acquiring the depth information of the clothes according to the at least two virtual clothes images, and establishing a three-dimensional image model of the target virtual clothes according to the clothes depth information.
In one embodiment, when synthesizing the three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body, the processor 401 is configured to perform the following steps:
acquiring position information of a first feature point in a three-dimensional image model of the target virtual clothes;
determining corresponding second feature point position information in the three-dimensional image model of the human body according to the first feature point position information;
and synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body according to the position information of the first characteristic point and the position information of the second characteristic point.
In an embodiment, after acquiring at least two human body images of the user through the two cameras of the terminal, the processor 401 is further configured to perform the following steps:
selecting the image with the smallest size from the at least two human body images as a reference image;
and compressing other human body images according to the size of the reference image so as to enable the sizes of the at least two human body images to be the same.
In an embodiment, please refer to fig. 10, where fig. 10 is a schematic diagram of a terminal structure according to an embodiment of the present invention. The terminal 500 may include Radio Frequency (RF) circuitry 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, audio circuitry 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the terminal structure shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, some components may be combined, or a different arrangement of components may be used.
The radio frequency circuit 501 may be used to receive and transmit information, or to receive and transmit signals during a call. In particular, it receives downlink information from a base station and delivers it to the one or more processors 508 for processing, and it transmits uplink data to the base station. In general, the radio frequency circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
The memory 502 may be used to store applications and data. The memory 502 stores applications containing executable code, and the application programs may constitute various functional modules. The processor 508 executes various functional applications and performs data processing by running the application programs stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data or a phonebook), and the like.
The input unit 503 may be used to receive input numbers, character information, or user characteristic information (such as a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, in one embodiment, the input unit 503 may include a touch-sensitive surface as well as other input devices.
The display unit 504 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The terminal may also include at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
The audio circuit 506 may provide an audio interface between the user and the terminal through a speaker and a microphone. On one hand, the audio circuit 506 can convert received audio data into an electrical signal and transmit it to the speaker, where it is converted into a sound signal and output. On the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data; the audio data is then processed by the processor 508 and sent via the radio frequency circuit 501 to, for example, another terminal, or output to the memory 502 for further processing.
Wireless Fidelity (WiFi) is a short-distance wireless transmission technology. Through the wireless fidelity module 507, the terminal can help the user receive and send e-mail, browse web pages, access streaming media, and the like, providing wireless broadband internet access for the user. Although fig. 10 shows the wireless fidelity module 507, it is understood that it is not an essential component of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 508 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by running or executing an application program stored in the memory 502 and calling data stored in the memory 502, thereby performing overall monitoring of the terminal. Optionally, processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 508.
The terminal also includes a power supply 509 (such as a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 508 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 509 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown in fig. 10, the terminal may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
The processor 508 is also configured to implement the following functions: the method comprises the steps of collecting at least two human body images through two cameras of a terminal, obtaining depth information of a human body according to the at least two human body images, establishing a three-dimensional image model of the human body according to the depth information of the human body, synthesizing the three-dimensional image model of a target virtual garment and the three-dimensional image model of the human body to obtain a target three-dimensional image model, and displaying the target three-dimensional image model.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It should be noted that, as one of ordinary skill in the art would understand, all or part of the steps in the various methods of the above embodiments may be implemented by relevant hardware instructed by a program. The program may be stored in a computer-readable storage medium, such as a memory of the terminal, and executed by at least one processor in the terminal; during execution, the flow of embodiments such as the image display method may be included. The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
In the foregoing, the image display method, the image display apparatus, the storage medium, and the terminal provided by the embodiments of the present invention have been described in detail. Each functional module may be integrated into one processing chip, each functional module may exist alone physically, or two or more functional modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (20)
- An image display method, comprising:acquiring at least two human body images through two cameras of a terminal;acquiring depth information of the human body according to the at least two human body images;establishing a three-dimensional image model of the human body according to the depth information of the human body;synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;and displaying the target three-dimensional image model.
- The image display method of claim 1, wherein the step of obtaining depth information of the human body from the at least two human body images comprises:acquiring the distance between two camera modules of the double cameras;generating parallax information between the at least two human body images according to the distance;and calculating the depth information of the human body according to the parallax information.
- The image display method of claim 1, wherein prior to synthesizing the three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body, the method further comprises:acquiring at least two virtual clothes images of the target virtual clothes;and acquiring the depth information of the clothes according to the at least two virtual clothes images, and establishing a three-dimensional image model of the target virtual clothes according to the clothes depth information.
- The image display method of claim 1, wherein the step of synthesizing the three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body comprises:acquiring position information of a first feature point in a three-dimensional image model of the target virtual clothes;determining corresponding second feature point position information in the three-dimensional image model of the human body according to the first feature point position information;and synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body according to the position information of the first characteristic point and the position information of the second characteristic point.
- The image display method of claim 1, wherein after acquiring at least two human body images of the user through the dual cameras of the terminal, the method further comprises:selecting the image with the smallest size from the at least two human body images as a reference image;and compressing other human body images according to the size of the reference image so as to enable the sizes of the at least two human body images to be the same.
- The image display method as claimed in claim 1, wherein after displaying the target three-dimensional image model, the method further comprises:receiving a clothing parameter adjusting instruction input by a user;adjusting the target three-dimensional image model according to the clothing adjusting instruction;and displaying the adjusted target three-dimensional image model.
- The image display method as claimed in claim 1, wherein after displaying the target three-dimensional image model, the method further comprises:receiving a query instruction of a user for the target virtual clothing, wherein the query instruction carries attribute information of the target virtual clothing;and inquiring the price of the target clothes according to the attribute information.
- An image display apparatus, comprising: the human body image acquisition module, the human body depth acquisition module, the human body model building module, the synthesis module and the display module;the human body image acquisition module is used for acquiring at least two human body images through the double cameras of the terminal;the human body depth acquisition module is used for acquiring the depth information of the human body according to the at least two human body images;the human body model establishing module is used for establishing a three-dimensional image model of the human body according to the depth information of the human body;the synthesis module is used for synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;and the display module is used for displaying the target three-dimensional image model.
- The image display apparatus of claim 8, wherein the depth information acquisition module comprises: a distance acquisition submodule, a parallax generation submodule, and a depth calculation submodule;the distance acquisition submodule is used for acquiring the distance between two camera modules of the double cameras;the parallax generation submodule is used for generating parallax information between the at least two human body images according to the distance;and the depth calculation submodule is used for calculating the depth information of the human body according to the parallax information.
- The image display apparatus of claim 8, wherein the apparatus further comprises: the system comprises a clothing image acquisition module, a clothing depth acquisition module and a clothing model establishing module;the clothing image acquisition module is used for acquiring at least two virtual clothing images of the target virtual clothing before the synthesis module synthesizes the three-dimensional image model of the target virtual clothing with the three-dimensional image model of the human body;the clothing depth obtaining module is used for obtaining the depth information of the clothing according to the at least two virtual clothing images;the clothing model establishing module is used for establishing a three-dimensional image model of the target virtual clothing according to the clothing depth information.
- The image display device of claim 8, wherein the compositing module comprises: a first information acquisition submodule, a second information determination submodule and a synthesis submodule;the first information acquisition submodule is used for acquiring position information of a first feature point in a three-dimensional image model of the target virtual clothes;the second information determining submodule is used for determining corresponding second characteristic point position information in the three-dimensional image model of the human body according to the first characteristic point position information;and the synthesis submodule is used for synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body according to the position information of the first characteristic point and the position information of the second characteristic point.
- The image display apparatus of claim 8, wherein the apparatus further comprises: a selection module and a processing module;the selecting module is used for selecting the image with the smallest size among the at least two human body images as a reference image after the human body image acquiring module acquires the at least two human body images through the double cameras of the terminal;and the processing module is used for compressing other human body images according to the size of the reference image so as to enable the sizes of the at least two human body images to be the same.
- The image display apparatus of claim 8, wherein the apparatus further comprises: the device comprises an adjustment instruction receiving module and an adjustment module;the adjusting instruction receiving module is used for receiving a clothing parameter adjusting instruction input by a user after the display module displays the target three-dimensional image model;the adjusting module is used for adjusting the target three-dimensional image model according to the clothing adjusting instruction;and the display module is also used for displaying the adjusted target three-dimensional image model.
- The image display apparatus of claim 8, wherein the apparatus further comprises: the query instruction receiving module and the query module;the query instruction receiving module is used for receiving a query instruction of a user for the target virtual clothing after the display module displays the target three-dimensional image model, wherein the query instruction carries attribute information of the target virtual clothing;and the query module is used for querying the price of the target clothes according to the attribute information.
- A storage medium, wherein the storage medium stores instructions that are loaded by a processor to perform the steps of:acquiring at least two human body images through two cameras of a terminal;acquiring depth information of the human body according to the at least two human body images;establishing a three-dimensional image model of the human body according to the depth information of the human body;synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;and displaying the target three-dimensional image model.
- A terminal comprising a memory and a processor, the memory storing instructions, the processor loading the instructions to perform the steps of:acquiring at least two human body images through two cameras of a terminal;acquiring depth information of the human body according to the at least two human body images;establishing a three-dimensional image model of the human body according to the depth information of the human body;synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body to obtain a target three-dimensional image model;and displaying the target three-dimensional image model.
- The terminal of claim 16, wherein when obtaining the depth information of the human body according to the at least two human body images, the processor is configured to perform the following steps:acquiring the distance between two camera modules of the double cameras;generating parallax information between the at least two human body images according to the distance;and calculating the depth information of the human body according to the parallax information.
- The terminal of claim 16, wherein prior to synthesizing the three-dimensional image model of the target virtual apparel and the three-dimensional image model of the human body, the processor is further configured to perform the steps of:acquiring at least two virtual clothes images of the target virtual clothes;and acquiring the depth information of the clothes according to the at least two virtual clothes images, and establishing a three-dimensional image model of the target virtual clothes according to the clothes depth information.
- The terminal of claim 16, wherein in synthesizing the three-dimensional image model of the target virtual apparel and the three-dimensional image model of the human body, the processor is configured to perform the steps of:acquiring position information of a first feature point in a three-dimensional image model of the target virtual clothes;determining corresponding second feature point position information in the three-dimensional image model of the human body according to the first feature point position information;and synthesizing the three-dimensional image model of the target virtual clothes and the three-dimensional image model of the human body according to the position information of the first characteristic point and the position information of the second characteristic point.
- The terminal of claim 16, wherein after acquiring at least two human images of the user through the dual cameras of the terminal, the processor is further configured to:selecting the image with the smallest size from the at least two human body images as a reference image;and compressing other human body images according to the size of the reference image so as to enable the sizes of the at least two human body images to be the same.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/091369 WO2019000464A1 (en) | 2017-06-30 | 2017-06-30 | Image display method and device, storage medium, and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110622218A true CN110622218A (en) | 2019-12-27 |
Family
ID=64742782
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780090737.9A (pending, published as CN110622218A) | 2017-06-30 | 2017-06-30 | Image display method, device, storage medium and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110622218A (en) |
WO (1) | WO2019000464A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110264393B (en) * | 2019-05-15 | 2023-06-23 | 联想(上海)信息技术有限公司 | Information processing method, terminal and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10628729B2 (en) * | 2010-06-08 | 2020-04-21 | Styku, LLC | System and method for body scanning and avatar creation |
CN102956004A (en) * | 2011-08-25 | 2013-03-06 | 鸿富锦精密工业(深圳)有限公司 | Virtual fitting system and method |
CN106408613A (en) * | 2016-09-18 | 2017-02-15 | 合肥视尔信息科技有限公司 | Stereoscopic vision building method suitable for virtual lawsuit advisor |
2017
- 2017-06-30: WO PCT/CN2017/091369 (published as WO2019000464A1), active, Application Filing
- 2017-06-30: CN CN201780090737.9A (published as CN110622218A), active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156810A (en) * | 2011-03-30 | 2011-08-17 | 北京触角科技有限公司 | Augmented reality real-time virtual fitting system and method thereof |
CN102682211A (en) * | 2012-05-09 | 2012-09-19 | 晨星软件研发(深圳)有限公司 | Three-dimensional fitting method and device |
CN103871099A (en) * | 2014-03-24 | 2014-06-18 | 惠州Tcl移动通信有限公司 | 3D simulation matching processing method and system based on mobile terminal |
WO2015167039A1 (en) * | 2014-04-28 | 2015-11-05 | (주)에프엑스기어 | Apparatus and method for generating virtual clothes for augmented reality-based virtual fitting |
CN104952113A (en) * | 2015-07-08 | 2015-09-30 | 北京理工大学 | Dress fitting experience method, system and equipment |
CN106815825A (en) * | 2016-12-09 | 2017-06-09 | 深圳市元征科技股份有限公司 | One kind fitting method for information display and display device |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111563850A (en) * | 2020-03-20 | 2020-08-21 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111563850B (en) * | 2020-03-20 | 2023-12-05 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111709874A (en) * | 2020-06-16 | 2020-09-25 | 北京百度网讯科技有限公司 | Image adjusting method and device, electronic equipment and storage medium |
CN111709874B (en) * | 2020-06-16 | 2023-09-08 | 北京百度网讯科技有限公司 | Image adjustment method, device, electronic equipment and storage medium |
CN115883814A (en) * | 2023-02-23 | 2023-03-31 | 阿里巴巴(中国)有限公司 | Method, device and equipment for playing real-time video stream |
Also Published As
Publication number | Publication date |
---|---|
WO2019000464A1 (en) | 2019-01-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110348543B (en) | Fundus image recognition method and device, computer equipment and storage medium | |
CN108594997B (en) | Gesture skeleton construction method, device, equipment and storage medium | |
CN109947886B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110807361B (en) | Human body identification method, device, computer equipment and storage medium | |
CN108525305B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN111476306A (en) | Object detection method, device, equipment and storage medium based on artificial intelligence | |
CN107844781A (en) | Face character recognition methods and device, electronic equipment and storage medium | |
CN110570460B (en) | Target tracking method, device, computer equipment and computer readable storage medium | |
CN110599593B (en) | Data synthesis method, device, equipment and storage medium | |
CN112287852B (en) | Face image processing method, face image display method, face image processing device and face image display equipment | |
CN112991494B (en) | Image generation method, device, computer equipment and computer readable storage medium | |
CN111489378A (en) | Video frame feature extraction method and device, computer equipment and storage medium | |
CN112257552B (en) | Image processing method, device, equipment and storage medium | |
CN111368116B (en) | Image classification method and device, computer equipment and storage medium | |
CN110622218A (en) | Image display method, device, storage medium and terminal | |
CN110765525B (en) | Method, device, electronic equipment and medium for generating scene picture | |
CN111738914A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN113409468B (en) | Image processing method and device, electronic equipment and storage medium | |
CN112766406B (en) | Method and device for processing object image, computer equipment and storage medium | |
CN110675412A (en) | Image segmentation method, training method, device and equipment of image segmentation model | |
CN113538321B (en) | Vision-based volume measurement method and terminal equipment | |
CN112581358A (en) | Training method of image processing model, image processing method and device | |
CN110705438A (en) | Gait recognition method, device, equipment and storage medium | |
CN110807769B (en) | Image display control method and device | |
CN113987326B (en) | Resource recommendation method and device, computer equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191227 ||