CN113450448A - Image processing method, device and system - Google Patents

Image processing method, device and system

Info

Publication number
CN113450448A
CN113450448A
Authority
CN
China
Prior art keywords
dimensional model
image information
rendering
feature identifier
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010219156.4A
Other languages
Chinese (zh)
Inventor
杜稼淳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010219156.4A
Publication of CN113450448A

Classifications

    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F 18/22: Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06T 15/00, 15/005: 3D image rendering; General purpose rendering architectures
    • G06T 19/20: Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2200/08: Indexing scheme for image data processing or generation, involving all processing steps from image acquisition to 3D model generation

Abstract

The invention discloses an image processing method, device and system. The method comprises: acquiring image information in a scene, wherein the image information at least includes a wearable device and an object carrying the wearable device, and the wearable device is provided with a feature identifier; identifying the feature identifier on the wearable device; acquiring the position of the feature identifier in the image information and a three-dimensional model corresponding to the object; and calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model. The method solves the technical problem in the prior art that drift readily occurs while rendering a digital twin motion model, resulting in a poor display effect.

Description

Image processing method, device and system
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, and a system for processing an image.
Background
The digital twin concept and augmented reality technology are gradually gaining adoption and can support professional fields such as demonstration-type projects and smart healthcare/sports. However, current augmented reality and wearable-device technology is difficult to use for recognizing human motion. The film and television industry uses green-screen technology to capture actor movements and convert them into 3D effects, thereby recognizing user actions, but green-screen and similar techniques can only be used in specific, laboratory-like environments and cannot be reused in professional scenes. Human motion recognition outside the laboratory can be achieved with modules equipped with depth-sensing cameras, but an external camera based on computer vision is strongly affected by the environment and is prone to problems such as drift in demonstration-type projects.
Fig. 1 is a schematic diagram of the drift produced when a digital twin motion model is displayed. As shown in fig. 1, in the initial state there is already some deviation between the model and the user; when the user moves, the delay introduced by rendering and other processing causes the model and the user to become disjointed, i.e. drift occurs; only when the user stops moving does the model catch up and stay consistent.
No effective solution has yet been proposed for the problem in the prior art that drift readily occurs while rendering a digital twin motion model, resulting in a poor display effect.
Disclosure of Invention
The embodiment of the invention provides an image processing method, device and system, which are used for at least solving the technical problem of poor display effect caused by drift easily generated in the process of rendering a digital twin motion model in the prior art.
According to an aspect of the embodiments of the present invention, there is provided an image processing method, including: acquiring image information in a scene, wherein the image information at least includes a wearable device and an object carrying the wearable device, and the wearable device is provided with a feature identifier; identifying the feature identifier on the wearable device; acquiring the position of the feature identifier in the image information and a three-dimensional model corresponding to the object; and calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
According to another aspect of the embodiments of the present invention, there is also provided an image processing method, including: sending a model display request to a rendering processor, wherein the rendering processor acquires image information in a scene, the image information at least including a wearable device and an object carrying the wearable device, identifies the feature identifier on the wearable device, acquires the position of the feature identifier in the image information, and calibrates the three-dimensional model corresponding to the object according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model; and receiving and displaying a rendering result of the rendering processor rendering the three-dimensional model.
According to another aspect of the embodiments of the present invention, there is also provided an image processing system, including: a wearable device on which a feature identifier is provided; an image acquisition device configured to acquire image information in a scene and send the image information to a rendering processor, wherein the image information at least includes the wearable device and an object carrying the wearable device; the rendering processor, configured to identify the feature identifier on the wearable device, acquire the position of the feature identifier in the image information, and calibrate the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model corresponding to the object; and a display device configured to display a rendering result of rendering the three-dimensional model.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus, including: a first acquisition module configured to acquire image information in a scene, wherein the image information at least includes a wearable device and an object carrying the wearable device, and the wearable device is provided with a feature identifier; an identification module configured to identify the feature identifier on the wearable device; a second acquisition module configured to acquire the position of the feature identifier in the image information and the three-dimensional model corresponding to the object; and a calibration module configured to calibrate the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the above-mentioned image processing method.
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein the program, when running, executes the above-mentioned image processing method.
In the embodiments of the present invention, a feature identifier is provided on the wearable device, and the three-dimensional model is calibrated according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model. The wearing object therefore does not need to be matched with the three-dimensional model by locating parameters such as its skeleton and contour in the image; the matching can be completed directly through the feature identifier. This improves matching efficiency, reduces the time required for computation, and thus reduces the delay in displaying the three-dimensional model, weakening the drift effect caused by excessive delay in the rendering process and solving the technical problem in the prior art that drift readily occurs while rendering a digital twin motion model, resulting in a poor display effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram showing drift in a digital twin motion model;
fig. 2 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing a processing method of an image;
FIG. 3 is a flowchart of a method for processing an image according to embodiment 1 of the present application;
FIG. 4 is a schematic diagram of reducing rendering accuracy according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for processing an image according to embodiment 2 of the present application;
FIG. 6 is a schematic diagram of an image processing system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an image processing apparatus according to embodiment 4 of the present application;
FIG. 8 is a schematic diagram of an image processing apparatus according to embodiment 5 of the present application; and
fig. 9 is a block diagram of a computer terminal according to embodiment 6 of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:
Digital twin: a technique that makes full use of data such as physical models, sensor updates and operating history, integrates multi-disciplinary, multi-physical-quantity, multi-scale and multi-probability simulation processes, and completes a mapping in virtual space, constructing a virtual entity that accurately reflects the state of the physical equipment and thus the full life cycle of the corresponding physical equipment.
Augmented reality: Augmented Reality (AR) technology seamlessly fuses virtual information with the real world. It makes extensive use of techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction and sensing, applies computer-generated virtual information such as text, images, three-dimensional models, music and video to the real world after simulation, and lets the two kinds of information complement each other, thereby enhancing the real world.
Wearable device: a portable device that is worn directly on the body or integrated into the user's clothing or accessories.
Example 1
There is also provided, in accordance with an embodiment of the present invention, an embodiment of a method of processing an image, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method provided by embodiment 1 of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 2 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing the image processing method. As shown in fig. 2, the computer terminal 20 (or mobile device 20) may include one or more processors 202 (shown as 202a, 202b, ..., 202n; the processors 202 may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 204 for storing data, and a transmission module 206 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 2 is only illustrative and does not limit the structure of the electronic device. For example, the computer terminal 20 may also include more or fewer components than shown in fig. 2, or have a different configuration than shown in fig. 2.
It should be noted that the one or more processors 202 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 20 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 204 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the image processing method in the embodiment of the present invention, and the processor 202 executes various functional applications and data processing by running the software programs and modules stored in the memory 204, that is, implementing the image processing method described above. Memory 204 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 204 may further include memory located remotely from the processor 202, which may be connected to the computer terminal 20 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 206 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 20. In one example, the transmission device 206 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 206 can be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 20 (or mobile device).
It should be noted here that, in some alternative embodiments, the computer device (or mobile device) shown in fig. 2 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should also be noted that fig. 2 is only one specific example and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
Under the above operating environment, the present application provides a method for processing an image as shown in fig. 3. Fig. 3 is a flowchart of an image processing method according to embodiment 1 of the present application.
Step S31, acquiring image information in the scene, where the image information at least includes the wearable device and the object carrying the wearable device, and the wearable device has the feature identifier.
Specifically, the image information in the scene may be 2D image information acquired by an ordinary camera. The object carrying the wearable device may be a person, an animal, or a bionic device such as a robot. For example, the person carrying the wearable device may be a presenter at a conference, a teacher in a remote classroom, a speaker at a remote meeting, and so on; the animal carrying the wearable device may be a test animal under medical observation, an injured rare animal in need of treatment, and so on; and the bionic device carrying the wearable device may be a bionic robot, a bionic animal, a bionic plant, and so on.
In an alternative embodiment, taking a product launch event as an example, the event can be viewed either on site or through a live webcast. When watching on site, an on-site audience member can acquire the image information in the scene through a portable terminal device; when the event is watched through a live webcast, the image information in the scene can be collected by cameras deployed at the launch venue. Multiple cameras may be deployed on site to capture the image information in the scene from different angles.
The wearable device may be a device such as a bracelet or glasses, and the object wearing it may be a user wearing the bracelet or glasses. In the product launch scenario, a presenter may wear a bracelet while demonstrating a product at the main desk; the image information obtained by the camera then includes at least the bracelet worn by the presenter. It should be noted that, in order to detect the movements of both of the presenter's arms, the presenter may wear a bracelet on each arm.
The wearable device may also be a device such as a helmet or a collar, and the object wearing it may be an animal wearing the helmet or collar. In the context of scientific observation of animals, the observed animal may wear a helmet or collar while moving about freely; the image information acquired by the camera then includes at least the helmet or collar worn by the observed animal.
The object wearing the wearable device may also be a bionic device. Taking a robot as an example, the robot may wear devices such as a bracelet or a helmet while executing preset instructions, and the acquired image information includes at least the robot and the bracelet, helmet or other wearable device it wears.
The feature identifier on the wearable device may be a characteristic color point or color block arranged on the surface of the wearable device. The feature identifier may be visible to users, or it may be hidden from them through special processing, as long as the image acquisition device can extract it from the surface of the wearable device.
Step S33, identifying the feature identifier on the wearable device.
After the image acquisition device acquires the image information in the scene, the feature identifier in the image information is recognized. The feature information of the feature identifier may be pre-stored, for example the shape of the feature identifier, its color, its type and so on, and the feature identifier is then recognized according to this feature information.
In an optional embodiment, the image acquisition device sends the image information to a cloud processor, and the cloud processor recognizes the feature identifier on the surface of the wearable device. For example, the cloud processor pre-stores the type of the feature identifier as a two-dimensional code of a specific size, and when a two-dimensional code of that size is recognized in the image information, the feature information is determined to have been recognized. For another example, the cloud processor pre-stores a color band composed of several colors as the feature identifier, and when a band composed of those colors is recognized in the image information, the feature information is determined to have been recognized.
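The recognition step can be illustrated with a short sketch. The following Python/OpenCV fragment is only an illustrative assumption of how a pre-stored color-band signature might be checked in a 2D frame; the HSV ranges, the minimum blob area and the function names are hypothetical and are not taken from the patent.

    import cv2
    import numpy as np

    # Pre-stored feature information: the marker is assumed to be a band made of two
    # color segments; each entry is a (lower, upper) HSV bound. Placeholder values.
    MARKER_COLORS = [
        (np.array([100, 120, 80]), np.array([130, 255, 255])),  # blue-ish segment
        (np.array([20, 120, 80]), np.array([35, 255, 255])),    # yellow-ish segment
    ]
    MIN_BLOB_AREA = 50  # minimum segment size in pixels, assumed calibration value

    def marker_present(frame_bgr):
        """Return True if every pre-stored color segment of the band appears in the frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        for lower, upper in MARKER_COLORS:
            mask = cv2.inRange(hsv, lower, upper)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not any(cv2.contourArea(c) >= MIN_BLOB_AREA for c in contours):
                return False  # one expected color segment is missing
        return True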
Step S35, acquiring the position of the feature identifier in the image information and a three-dimensional model corresponding to the object.
The position of the above-mentioned feature identifier in the image information may be represented by a coordinate parameter. After the feature identifier is identified in the acquired image information, the coordinate parameter of the feature identifier can be determined according to the coordinate system in the image information. The three-dimensional model may be a preset general model or a three-dimensional model corresponding to the wearing object.
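Continuing the sketch above, one way to obtain the coordinate parameter of a detected marker is to take the centroid of its largest blob in the binary mask. Again this is only an illustrative sketch under the same assumptions; the mask input and the function name are not from the patent.

    import cv2

    def marker_position(mask):
        """Return the (u, v) pixel coordinates of the feature identifier, or None."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)       # keep the biggest blob
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid in image coordinates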
Step S37, calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
By rendering the three-dimensional model, the mapping of the wearing object can be completed in virtual space and the model can be displayed on the user's viewing device, which may be an augmented reality device or a mobile terminal device with an augmented reality function; through the viewing device, a three-dimensional model (virtual entity) that accurately reflects the wearing object can be viewed. In this step, the three-dimensional model is calibrated according to the position of the feature identifier in the image information during rendering.
Calibration is used to find the object to be simulated, i.e. the wearing object, in the image information and to match it with the three-dimensional model in virtual reality; after the matching between the wearing object and the three-dimensional model is completed, the corresponding three-dimensional model is displayed based on the movements of the wearing object.
Without a feature identifier, rendering the three-dimensional model requires locating parameters such as the skeleton and contour of the wearing object so that the object can be found in the image information and matched with the three-dimensional model, allowing the wearing object to be mapped as a digital twin. In the above scheme, once the position of the feature identifier in the image information is determined, the position of the wearable device on the wearing object is also known (for example, a bracelet is necessarily worn on the wrist, glasses are necessarily worn on the face, and so on). Matching the wearing object with the three-dimensional model therefore no longer requires locating parameters such as the skeleton and contour in the image; the matching can be completed directly through the feature identifier. This improves matching efficiency, reduces the computation time, reduces the delay in displaying the three-dimensional model, and weakens the drift effect caused by excessive delay in the rendering process.
According to the embodiments of the present application, a feature identifier is arranged on the wearable device and the three-dimensional model is calibrated according to the position of the feature identifier in the image information while the model is being rendered. The wearing object does not need to be matched with the three-dimensional model by locating parameters such as its skeleton and contour in the image; the matching can be completed directly through the feature identifier. This improves matching efficiency, reduces the computation time, and thus reduces the delay in displaying the three-dimensional model, weakening the drift effect caused by excessive delay in the rendering process and solving the technical problem in the prior art that drift readily occurs while rendering a digital twin motion model, resulting in a poor display effect.
As an alternative embodiment, calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model includes: recognizing the object in the image information according to the position of the feature identifier in the image information; and rendering the three-dimensional model based on the recognized object, with the feature identifier as the origin of the world coordinate system.
Once the position of the feature identifier in the image information is determined, and because the position at which the wearable device is worn by the object is known in advance (for example, if the wearable device is a bracelet the worn position is the wrist; if it is glasses, the worn position is the face, and so on), the position where the object wears the device can be determined by combining this knowledge with the position of the feature identifier in the image information, and hence the position of the entire object in the image information can be determined.
The world coordinate system above is the absolute coordinate system of the system in which the three-dimensional model is presented. In the above scheme, the feature identifier recognized in the image information is taken as the origin of the world coordinate system, and a three-dimensional model of the object can then be rendered in the world coordinate system based on the object recognized in the image information.
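A minimal sketch of this idea, assuming the model vertices are already expressed relative to the marker and that the marker's image position has been lifted to a 3D point: placing the model then reduces to a translation by the marker position. The names and the flat array layout are illustrative assumptions, not the patent's data structures.

    import numpy as np

    def place_model(model_vertices_local, marker_position_world):
        """Translate vertices defined around the marker origin into the world coordinate system."""
        verts = np.asarray(model_vertices_local, dtype=float)    # shape (N, 3), marker-relative
        origin = np.asarray(marker_position_world, dtype=float)  # shape (3,), marker as world origin
        return verts + origin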
Fig. 4 is a schematic diagram of reducing rendering accuracy according to an embodiment of the present application. In fig. 1 and fig. 4, the thicker lines represent the object itself and the thinner lines represent the imaging of the three-dimensional model (where only thicker lines appear, the object and the imaged model coincide). The object in fig. 1 does not wear a smart bracelet, so in the initial state of fig. 1 the object in the image information has to be determined by analyzing every part of the image information, which not only involves a large amount of computation and takes a long time, but also leaves a certain deviation in the recognition result. In fig. 4, the presenter wears a bracelet on the wrist; in the initial state, calibration is performed with the feature identifier on the bracelet, so the three-dimensional model and the presenter can be matched quickly and with a very high degree of matching.
As an alternative embodiment, the feature identifier is a color feature point on the surface of the wearable device.
In the above scheme, the feature identifier is a color feature point on the surface of the wearable device. The color feature point may be visible to users, or hidden from them and recognizable only by the image acquisition device. It is not limited to a dot shape; it may also be a stripe or another shape, as long as the image acquisition device can recognize it on the surface of the wearable device.
The color feature point may be placed at a position on the surface of the wearable device that faces outward after the device is worn by the object. For example, if the wearable device is a bracelet, the color feature point may be arranged on the side of the bracelet facing outward, so that it is not blocked after the user wears the bracelet on the wrist and can be captured well by the image acquisition device. The wearable device may also carry prompt information indicating how the object should wear it, so that the feature identifier is not blocked after the device is worn.
As an alternative embodiment, the difference between the gray value of the color feature point and the gray value of the background color of the background where the wearable device is located is within a preset gray difference range.
The background where the wearable device is located may be the scene in which the object is located. Taking the product launch event as an example again, the background where the wearable device is located is the background of the launch venue; the background color may be determined by the venue lighting. When the lighting is dark, the background color may be black, and when the lighting is bright, the background color may be the actual color of the background.
Setting the difference between the gray value of the color feature point and the gray value of the background color within a preset gray difference range allows the color feature point on the surface of the wearable device to be hidden in the scene as far as possible, so that it does not form a large color contrast with the background and degrade the on-site display effect; the aims of not being noticed by viewers and improving the on-site display effect are thus achieved.
In an optional embodiment, the preset gray difference range may be a value smaller than 0.5, for example 0.4. The gray value of the color feature point and the gray value of the background color of the background where the wearable device is located are both normalized; if the difference between the two normalized gray values is smaller than 0.4, the difference is determined to be within the preset gray difference range.
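The check described above can be written as a one-line comparison. The sketch below assumes 8-bit gray values and uses the 0.4 threshold mentioned in the text as its default; the function name is an illustrative assumption.

    def marker_blends_with_background(marker_gray, background_gray, max_diff=0.4):
        """Normalize 8-bit gray values to [0, 1] and test the preset gray-difference range."""
        diff = abs(marker_gray / 255.0 - background_gray / 255.0)
        return diff < max_diff

    # e.g. marker_blends_with_background(200, 60) -> False; marker_blends_with_background(120, 90) -> True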
According to the embodiments of the present application, arranging the feature identifier on the surface of the wearable device allows the object in the image information to be located quickly when the three-dimensional model is rendered, which reduces the delay in displaying the three-dimensional model and weakens the drift effect caused by excessive delay in the rendering process. This scheme alone, however, cannot completely eliminate the drift effect, and a slight drift phenomenon remains. The following scheme therefore further handles the remaining slight drift.
In an optional embodiment, the wearable device includes an acceleration sensor, and during the rendering of the three-dimensional model, the method further includes: acquiring acceleration data detected by an acceleration sensor; comparing the acceleration data with a preset acceleration threshold; and if the acceleration data is larger than the acceleration threshold value, reducing the rendering precision in the rendering process.
The drift effect refers to the phenomenon in which, because of the delay introduced by rendering and other processing, the three-dimensional model cannot keep up with the movement of the real object and becomes disjointed from it. When the movement speed is low, drift is unlikely to occur; when the object moves too fast, drift becomes more likely. This scheme therefore compares the detected acceleration data with a preset acceleration threshold and performs further processing when the detected acceleration data is greater than the threshold.
In an optional embodiment, again taking a presenter wearing a bracelet at a product launch event as an example, the bracelet carries an acceleration sensor that detects the acceleration data of the presenter's wrist in real time and transmits the acceleration data to the cloud server in real time. When any movement of the presenter drives the wrist so that the acceleration data detected by the acceleration sensor exceeds the preset acceleration threshold, the cloud server reduces the rendering precision in the process of rendering the three-dimensional model.
Reducing the rendering accuracy can reduce the data processing amount during rendering, and further reduce the time delay excessively generated during rendering, so that the drift effect of the three-dimensional model can be further weakened.
As an alternative embodiment, if the acceleration data is greater than the acceleration threshold, reducing the rendering precision in the rendering process includes: rendering the three-dimensional model as points, wherein the degree of scattering of the point cloud is proportional to the acceleration data.
In this scheme, the rendering precision is reduced by rendering the model as points, and the larger the acceleration data, the greater the scattering of the point cloud, so that the drift shown during the displacement can be hidden and a better display effect can be achieved.
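A sketch of this acceleration-driven precision reduction is given below, under assumed values for the threshold and for how the scattering scales with acceleration (neither value comes from the patent):

    import numpy as np

    ACCEL_THRESHOLD = 5.0   # m/s^2, assumed threshold
    SCATTER_SCALE = 0.002   # metres of jitter per unit of acceleration, assumed factor

    def pointify(vertices, acceleration, rng=None):
        """Return point positions for rendering; scattering grows in proportion to the acceleration."""
        rng = np.random.default_rng() if rng is None else rng
        verts = np.asarray(vertices, dtype=float)
        if acceleration <= ACCEL_THRESHOLD:
            return verts                        # keep full-precision rendering
        scatter = SCATTER_SCALE * acceleration  # proportionality between acceleration and scatter
        return verts + rng.normal(scale=scatter, size=verts.shape)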
As shown in fig. 4, the dotted lines in fig. 4 represent the model after point-cloud rendering. The object wears a bracelet on the wrist; in the initial state, calibration is performed with the feature identifier on the bracelet, so the three-dimensional model and the object are matched quickly and with a very high degree of matching. When the acceleration of the object's wrist exceeds the preset acceleration threshold, the intermediate state in which the model easily becomes disjointed from the movement is quickly recognized, the model is rendered as points, blurred, and position compensation is applied. When the movement stops, the rendered model follows the object and the original rendering precision is restored.
As an optional embodiment, in the process of rendering the three-dimensional model, the method further includes: acquiring a physiological parameter of the object; and changing the parameter information of the three-dimensional model according to the physiological parameter.
The physiological parameters may likewise be obtained through the wearable device. The physiological parameters may include the temperature of the object, the heart rate of the object, the blood pressure of the object, and the like; by acquiring them, the current physiological state of the object can be known, for example whether the object is excited or nervous. The parameter information of the model changed according to the physiological parameters may include the color of the model, the transparency of the model, and the like.
In this scheme, the wearable device detects the physiological parameters of the object and returns them to the cloud server, and the server changes the parameter information of the model according to the physiological parameters, thereby linking the virtual model with the real human body and increasing the data dimensions of the human digital twin display.
In addition, the wearable device can also obtain the position parameters of the object through a Global Positioning System (GPS), so as to change the parameter information of the model.
As an optional embodiment, the wearable device further includes a heart rate sensor, and changing the parameter information of the three-dimensional model according to the physiological parameter includes: acquiring the heart rate data of the object detected by the heart rate sensor; and adjusting the color information of the three-dimensional model according to the heart rate data.
In an alternative embodiment, when the detected heart rate data of the object is greater than a preset threshold, the color of the model, or of a part of the model, may be adjusted to red. The proportion of red in the color of the model may also be adjusted in real time according to the heart rate data of the object: the higher the object's heart rate, the larger the proportion of red in the model color and the more the whole model tends towards red.
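As an illustration of this linkage, a simple linear mapping from heart rate to the red proportion of the model color could look like the sketch below; the resting and maximum rates are assumed calibration values, not values from the patent.

    def red_ratio(heart_rate_bpm, resting=60.0, maximum=180.0):
        """Map heart rate to a red proportion in [0, 1]; a higher heart rate gives a redder model."""
        ratio = (heart_rate_bpm - resting) / (maximum - resting)
        return min(max(ratio, 0.0), 1.0)

    def tint_towards_red(base_rgb, ratio):
        """Blend an (R, G, B) color towards pure red by the given ratio."""
        r, g, b = base_rgb
        return (r + (255 - r) * ratio, g * (1 - ratio), b * (1 - ratio))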
In this scheme, the wearable device detects the heart rate data of the object and the color of the model is changed according to that data, so that the three-dimensional model is associated not only with the appearance of the object but also with the physiological data of the object.
As an optional embodiment, the wearable device further includes a galvanic skin sensor, and changing the parameter information of the three-dimensional model according to the physiological parameter includes: acquiring the sweat rate of the object detected by the galvanic skin sensor; and adjusting the transparency information of the three-dimensional model according to the sweat rate.
The galvanic skin response is an emotional index that represents the change in skin electrical conduction when the body is stimulated; it is generally expressed by resistance and its logarithm, or by conductance and its square root. The galvanic skin sensor can capture the sweat rate of the object's skin surface to determine the emotional state of the object. The transparency information of the three-dimensional model is adjusted according to the sweat rate: the higher the sweat rate, the higher the transparency of the three-dimensional model, so that viewers can perceive the emotional information of the object through the linkage between the three-dimensional model and the object's physiological parameters.
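The transparency linkage can be sketched the same way; the normalization range for the sweat rate is an assumed calibration, not a value from the patent.

    def model_transparency(sweat_rate, min_rate=0.0, max_rate=1.0):
        """Map the detected sweat rate to a transparency in [0, 1]; a higher rate means more transparent."""
        if max_rate <= min_rate:
            return 0.0
        t = (sweat_rate - min_rate) / (max_rate - min_rate)
        return min(max(t, 0.0), 1.0)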
As an alternative embodiment, before acquiring the image information in the scene, the method further includes: creating a three-dimensional model of the object, wherein creating the three-dimensional model of the object includes: scanning the object in three dimensions to obtain the shape parameters of the object, and creating a three-dimensional model of the object based on the shape parameters; or creating a general model according to preset shape parameters, and correcting the general model according to at least one shape parameter of the object to obtain the three-dimensional model of the object.
The above scheme provides two ways of creating the three-dimensional model of the object. In the first way, the object is scanned in three dimensions directly to obtain its shape parameters, which may include height parameters, face contour parameters, body contour parameters, and the like; based on these shape parameters, a three-dimensional model with a high similarity to the object can be constructed.
In the second way, a pre-created general model may be obtained directly and corrected according to a small number of shape parameters of the object to obtain the three-dimensional model of the object. These few shape parameters need not be obtained by scanning but can come from simple personal information, for example increasing the height of the general model if the object is male and decreasing the height of the general model if the object is female.
There may be multiple general models. For example, multiple sets of shape parameters may be prepared, such as young women, young men, middle-aged women and middle-aged men, and a general model created from each set of shape parameters, yielding multiple general models. After the attributes of the object are determined, the corresponding model can be selected from the multiple general models. For example, if the object is a young man, the general model created from the shape parameters of young men may be selected.
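A sketch of selecting one of several pre-built general models by object attributes and noting a simple height correction; the attribute keys, file names and correction field are hypothetical placeholders rather than anything specified in the patent.

    GENERAL_MODELS = {
        ("female", "young"): "general_female_young.obj",
        ("male", "young"): "general_male_young.obj",
        ("female", "middle-aged"): "general_female_middle.obj",
        ("male", "middle-aged"): "general_male_middle.obj",
    }

    def select_general_model(sex, age_group, height_cm=None):
        """Pick the general model matching the object's attributes and note any height correction."""
        model = GENERAL_MODELS.get((sex, age_group))
        correction = {"scale_height_to_cm": height_cm} if (model is not None and height_cm is not None) else None
        return model, correction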
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is also provided an image processing method, and fig. 5 is a flowchart of an image processing method according to embodiment 2 of the present application, as shown in fig. 5, the method includes:
step S51, sending a model display request to a rendering processor, wherein the rendering processor acquires image information in a scene, the image information at least comprises a wearable device and an object carrying the wearable device, identifies a feature identifier on the wearable device, acquires the position of the feature identifier in the image information, and calibrates a three-dimensional model according to the position of the feature identifier in the image information during rendering of the three-dimensional model corresponding to the object.
The scheme in this embodiment may be performed by a user's viewing device; when the user needs to view the digital twin model of the presenter, a model display request may be sent to the rendering processor. In an alternative embodiment, taking a product launch event as an example, the event can be viewed either on site or through a live webcast. When watching on site, an on-site audience member can send a model display request to the rendering processor through a portable terminal device; when watching a live webcast, a user can send a model display request to the rendering processor through an augmented reality device.
Specifically, the image information in the scene may be 2D image information acquired by an ordinary camera. The wearable device may be a device such as a bracelet or glasses, and the object wearing it may be a user wearing the bracelet or glasses. In the product launch scenario, a presenter may wear a bracelet while demonstrating a product at the main desk, and the image information obtained by the camera includes at least the bracelet worn by the presenter. It should be noted that, in order to detect the movements of both of the presenter's arms, the presenter may wear a bracelet on each arm. The feature identifier on the wearable device may be a characteristic color point or color block arranged on the surface of the wearable device; it may be visible to users or hidden from them through special processing, as long as the image acquisition device can extract it from the surface of the wearable device.
After the image acquisition device acquires the image information in the scene, the feature identifier in the image information is recognized. The feature information of the feature identifier may be pre-stored, for example the shape of the feature identifier, its color, its type and so on, and the feature identifier is then recognized according to this feature information. The position of the feature identifier in the image information may be represented by a coordinate parameter: after the feature identifier is recognized in the acquired image information, its coordinate parameter can be determined according to the coordinate system of the image information. The three-dimensional model may be a preset general model or a three-dimensional model corresponding to the wearing object.
By rendering the three-dimensional model, the mapping of the wearing object can be completed in virtual space and the model can be displayed on the user's viewing device, which may be an augmented reality device or a mobile terminal device with an augmented reality function; through the viewing device, a three-dimensional model (virtual entity) that accurately reflects the wearing object can be viewed. In this step, the three-dimensional model is calibrated according to the position of the feature identifier in the image information during rendering. Calibration is used to find the object to be simulated, i.e. the wearing object, in the image information and to match it with the three-dimensional model in virtual reality; after the matching is completed, the corresponding three-dimensional model is displayed based on the movements of the wearing object.
And step S53, receiving and displaying the rendering result of the rendering processor rendering the three-dimensional model.
Without a feature identifier, rendering the three-dimensional model requires locating parameters such as the skeleton and contour of the wearing object so that the object can be found in the image information and matched with the three-dimensional model, allowing the wearing object to be mapped as a digital twin. In the above scheme, once the position of the feature identifier in the image information is determined, the position of the wearable device on the wearing object is also known (for example, a bracelet is necessarily worn on the wrist, glasses are necessarily worn on the face, and so on). Matching the wearing object with the three-dimensional model therefore no longer requires locating parameters such as the skeleton and contour in the image; the matching can be completed directly through the feature identifier. This improves matching efficiency, reduces the computation time, reduces the delay in displaying the three-dimensional model, and weakens the drift effect caused by excessive delay in the rendering process.
According to the embodiments of the present application, a feature identifier is arranged on the wearable device and the three-dimensional model is calibrated according to the position of the feature identifier in the image information while the model is being rendered. The wearing object does not need to be matched with the three-dimensional model by locating parameters such as its skeleton and contour in the image; the matching can be completed directly through the feature identifier. This improves matching efficiency, reduces the computation time, and thus reduces the delay in displaying the three-dimensional model, weakening the drift effect caused by excessive delay in the rendering process and solving the technical problem in the prior art that drift readily occurs while rendering a digital twin motion model, resulting in a poor display effect.
The rendering server in this embodiment may also perform other steps in embodiment 1 without conflict, and details are not described here.
Example 3
According to an embodiment of the present invention, there is also provided a system for processing an image, and fig. 6 is a schematic diagram of a system for processing an image according to an embodiment of the present application, and as shown in the drawing, the system includes:
A wearable device 60, on which a feature identifier is provided.
The wearable device may be a device such as a bracelet or glasses, and the object wearing it may be a user wearing the bracelet or glasses. In a product launch scenario, a presenter may wear a bracelet while demonstrating a product at the main desk; the image information obtained by the camera then includes at least the bracelet worn by the presenter. It should be noted that, in order to detect the movements of both of the presenter's arms, the presenter may wear a bracelet on each arm.
The wearable device may also be a device such as a helmet or a collar, and the object wearing it may be an animal wearing the helmet or collar. In the context of scientific observation of animals, the observed animal may wear a helmet or collar while moving about freely; the image information acquired by the camera then includes at least the helmet or collar worn by the observed animal.
The feature identifier on the wearable device may be a characteristic color point or color block arranged on the surface of the wearable device. The feature identifier may be visible to users, or it may be hidden from them through special processing, as long as the image acquisition device can extract it from the surface of the wearable device.
And the image acquisition device 62 is used for acquiring image information in a scene and sending the image information to the rendering processor, wherein the image information at least comprises the wearable device and an object carrying the wearable device.
Specifically, the image information in the scene may be 2D image information acquired by using a common camera.
In an alternative embodiment, taking a product launch event as an example, the event can be viewed either on site or through a live webcast. When watching on site, an on-site audience member can acquire the image information in the scene through a portable terminal device; when the event is watched through a live webcast, the image information in the scene can be collected by cameras deployed at the launch venue. Multiple cameras may be deployed on site to capture the image information in the scene from different angles.
And the rendering processor 64 is configured to recognize the feature identifier on the wearable device, obtain the position of the feature identifier in the image information, and calibrate the three-dimensional model according to the position of the feature identifier in the image information during the process of rendering the three-dimensional model corresponding to the object.
The position of the above-mentioned feature identifier in the image information may be represented by a coordinate parameter. After the feature identifier is identified in the acquired image information, the coordinate parameter of the feature identifier can be determined according to the coordinate system in the image information. The three-dimensional model may be a preset general model or a three-dimensional model corresponding to the wearing object.
By rendering the three-dimensional model, the mapping of the wearing object can be completed in virtual space and the model can be displayed on the user's viewing device, which may be an augmented reality device or a mobile terminal device with an augmented reality function; through the viewing device, a three-dimensional model (virtual entity) that accurately reflects the wearing object can be viewed. In this process, the three-dimensional model is calibrated according to the position of the feature identifier in the image information during rendering.
The calibration serves to find the object to be simulated, namely the wearing object, in the image information and to match the wearing object with the three-dimensional model in virtual reality; after the matching is completed, the corresponding three-dimensional model is displayed according to the motion of the wearing object.
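As a minimal sketch of what the calibration can amount to, and under assumptions not stated in this disclosure (a pinhole camera with known intrinsics and a known or assumed marker depth), the detected marker pixel can be back-projected to a 3D anchor, and the model placed with the feature identifier as its origin. The function names and inputs below are illustrative placeholders.

```python
# Illustrative sketch only: anchoring the three-dimensional model at the
# detected feature identifier. Camera intrinsics and marker depth are
# assumed inputs, not values defined by this disclosure.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at a given depth into camera coordinates."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def calibrate_model(vertices, marker_uv, depth, intrinsics):
    """Translate model vertices so the feature identifier acts as their origin."""
    fx, fy, cx, cy = intrinsics
    anchor = backproject(marker_uv[0], marker_uv[1], depth, fx, fy, cx, cy)
    return vertices + anchor  # vertices: (N, 3) array in model-local coordinates
```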
The display device 66 is configured to display the rendering result of rendering the three-dimensional model.
The display device is the viewing device used by the viewer, and may be an augmented reality device or a mobile terminal device with an augmented reality function.
In the embodiment of the present application, a feature identifier is arranged on the wearable device, and the three-dimensional model is calibrated according to the position of the feature identifier in the image information during rendering. The wearing object therefore does not need to be matched with the three-dimensional model by locating parameters of the wearing object in the image, such as its skeleton or contour; the matching can be completed directly through the feature identifier. This improves matching efficiency, shortens the computation time, reduces the delay in displaying the three-dimensional model, and weakens the drift caused by excessive delay during rendering, thereby solving the technical problem in the prior art that a digital twin motion model is prone to drift during rendering, resulting in a poor display effect.
Where no conflict arises, the rendering server in this embodiment may also perform the other steps of embodiment 1; details are not repeated here.
Example 4
According to an embodiment of the present invention, there is also provided an image processing apparatus for implementing the image processing method in embodiment 1, and fig. 7 is a schematic diagram of an image processing apparatus according to embodiment 4 of the present application, as shown in fig. 7, the apparatus 700 includes:
a first obtaining module 702, configured to obtain image information in a scene, where the image information at least includes a wearable device and an object carrying the wearable device, and the wearable device has a feature identifier.
An identifying module 704 for identifying a feature identifier on the wearable device.
A second obtaining module 706, configured to obtain the position of the feature identifier in the image information and a three-dimensional model corresponding to the object.
A calibration module 708, configured to calibrate the three-dimensional model according to the position of the feature identifier in the image information during the rendering of the three-dimensional model.
It should be noted here that the first obtaining module 702, the identifying module 704, the second obtaining module 706 and the calibrating module 708 correspond to steps S31 to S37 in embodiment 1, and the four modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the modules described above as part of the apparatus may be run in the computer terminal 10 provided in the first embodiment.
As an alternative embodiment, the calibration module comprises: the recognition submodule is used for recognizing the object in the image information according to the position of the characteristic identifier in the image information; and the rendering submodule is used for rendering the three-dimensional model based on the identified object by taking the characteristic identifier as an origin in a world coordinate system.
As an alternative embodiment, the feature identifiers are color feature points of the surface of the wearable device.
As an alternative embodiment, the difference between the gray value of the color feature point and the gray value of the background color of the background where the wearable device is located is within a preset gray difference range.
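A trivial, purely illustrative check of this condition might compare the marker's gray value with that of its local background and require the difference to satisfy a preset range; the bounds below are assumed example values, not values given by this disclosure.

```python
# Illustrative sketch only: verifying the gray-value difference between the
# color feature point and its background against an assumed preset range.
def gray_difference_in_range(marker_gray, background_gray, lo=40, hi=255):
    diff = abs(int(marker_gray) - int(background_gray))
    return lo <= diff <= hi
```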
As an alternative embodiment, the apparatus further comprises: a third acquisition module configured to acquire, in a case where the wearable device includes an acceleration sensor, acceleration data detected by the acceleration sensor during rendering of the three-dimensional model; a comparison module configured to compare the acceleration data with a preset acceleration threshold; and a reduction module configured to reduce the rendering precision in the rendering process if the acceleration data is greater than the acceleration threshold.
As an alternative embodiment, the reduction module comprises: a point-cloud processing submodule configured to render the three-dimensional model as a point cloud, where the scattering degree of the point cloud is directly proportional to the acceleration data.
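A sketch of this behavior is given below, under the assumption (made only for illustration) that the model is held as an (N, 3) vertex array and that the threshold and scatter scale are freely chosen parameters.

```python
# Illustrative sketch only: degrading rendering precision when the wearable
# device reports high acceleration. Threshold and scatter scale are assumed.
import numpy as np

def degrade_to_point_cloud(vertices, accel, accel_threshold=8.0, scatter_scale=0.01):
    """Above the threshold, return a scattered point cloud whose scatter grows
    in direct proportion to the acceleration magnitude."""
    a = float(np.linalg.norm(accel))
    if a <= accel_threshold:
        return vertices  # keep the full-precision mesh
    rng = np.random.default_rng(0)
    scatter = scatter_scale * (a - accel_threshold)
    return vertices + rng.normal(0.0, scatter, size=vertices.shape)
```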
As an alternative embodiment, the apparatus further comprises: a fourth acquisition module configured to acquire physiological parameters of the object during rendering of the three-dimensional model; and a modification module configured to modify parameter information of the three-dimensional model according to the physiological parameters.
As an optional embodiment, the wearable device further comprises a heart rate sensor, and the modification module comprises: a first acquisition submodule configured to acquire heart rate data of the object detected by the heart rate sensor; and a first adjusting submodule configured to adjust the color information of the three-dimensional model according to the heart rate data.
As an optional embodiment, the wearable device further comprises a galvanic skin sensor, and the modification module comprises: a first acquisition submodule configured to acquire the sweat rate of the object detected by the galvanic skin sensor; and an adjusting submodule configured to adjust the transparency information of the three-dimensional model according to the sweat rate.
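By way of illustration only, physiological readings could drive these rendering parameters as sketched below; the normalization ranges and the specific color/alpha mappings are assumptions made for the example and are not defined by this disclosure.

```python
# Illustrative sketch only: mapping physiological readings to rendering
# parameters. The value ranges below are assumed normalization bounds.
def _clamp01(x):
    return max(0.0, min(1.0, x))

def heart_rate_to_color(bpm, lo=60.0, hi=180.0):
    """Blend from blue (resting) toward red (elevated); returns an RGB tuple."""
    t = _clamp01((bpm - lo) / (hi - lo))
    return (t, 0.0, 1.0 - t)

def sweat_rate_to_alpha(sweat_rate, max_rate=1.0):
    """Higher sweat rate -> lower alpha, i.e. a more transparent model."""
    return 1.0 - 0.6 * _clamp01(sweat_rate / max_rate)
```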
As an alternative embodiment, the apparatus further comprises: a creation module configured to create a three-dimensional model of the object before the image information in the scene is acquired, where the three-dimensional model is created by acquiring appearance parameters of the object through three-dimensional scanning and creating the three-dimensional model based on the appearance parameters, or by creating a general model according to preset appearance parameters and correcting the general model according to at least one appearance parameter of the object to obtain the three-dimensional model of the object.
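For the second option, a minimal sketch of correcting a general model could simply rescale it by the ratio of measured to preset appearance parameters; treating the correction as a per-axis scale, and the parameter names used below, are assumptions for illustration only.

```python
# Illustrative sketch only: correcting a general model with measured appearance
# parameters. Treating the correction as a per-axis scale is an assumption.
import numpy as np

def correct_general_model(vertices, preset_params, measured_params):
    """Scale a general model (x: shoulder width, y: depth, z: height)."""
    scale = np.array([
        measured_params.get("shoulder_width", preset_params["shoulder_width"]) / preset_params["shoulder_width"],
        measured_params.get("depth", preset_params["depth"]) / preset_params["depth"],
        measured_params.get("height", preset_params["height"]) / preset_params["height"],
    ])
    return vertices * scale  # vertices: (N, 3) array
```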
Example 5
According to an embodiment of the present invention, there is also provided an image processing apparatus for implementing the image processing method in embodiment 2. Fig. 8 is a schematic diagram of an image processing apparatus according to embodiment 5 of the present application; as shown in fig. 8, the apparatus 800 includes:
the sending module 802 is configured to send a model display request to a rendering processor, where the rendering processor obtains image information in a scene, where the image information at least includes a wearable device and an object carrying the wearable device, identifies a feature identifier on the wearable device, obtains a position of the feature identifier in the image information, and calibrates a three-dimensional model according to the position of the feature identifier in the image information during rendering of the three-dimensional model corresponding to the object.
The receiving module 804 is configured to receive and display a rendering result of the rendering processor rendering the three-dimensional model.
It should be noted here that the sending module 802 and the receiving module 804 correspond to steps S51 to S53 in embodiment 2, and the two modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the modules described above as part of the apparatus may be run in the computer terminal 10 provided in the first embodiment.
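A sketch of the client side described by these two modules is given below, assuming, purely for illustration, an HTTP interface to the rendering processor; the host name, endpoint, and JSON fields are hypothetical and are not defined by this disclosure.

```python
# Illustrative sketch only: send a model display request and receive the
# rendering result. URL, endpoint and JSON fields are hypothetical.
import requests

def request_and_receive(server="http://rendering-processor.local:8080"):
    resp = requests.post(f"{server}/model/display",
                         json={"client_id": "viewer-1"}, timeout=5)
    resp.raise_for_status()
    rendered = resp.content  # rendering result returned by the rendering processor
    with open("rendered_frame.png", "wb") as f:
        f.write(rendered)    # a real client would hand this to its display surface
    return rendered
```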
Example 6
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the image processing method: acquiring image information in a scene, wherein the image information at least comprises a wearable device and an object carrying the wearable device, and the wearable device is provided with a feature identifier; identifying the feature identifier on the wearable device; acquiring the position of the feature identifier in the image information and a three-dimensional model corresponding to the object; and calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
Alternatively, fig. 9 is a block diagram of a computer terminal according to embodiment 6 of the present application. As shown in fig. 9, the computer terminal A may include: one or more processors 902 (only one of which is shown), a memory 906, and a peripheral interface 908.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the image processing method and apparatus in the embodiments of the present invention; the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implements the image processing method described above. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, and these remote memories may be connected to the computer terminal A through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and the application programs stored in the memory through the transmission device to execute the following steps: acquiring image information in a scene, wherein the image information at least comprises a wearable device and an object carrying the wearable device, and the wearable device is provided with a feature identifier; identifying the feature identifier on the wearable device; acquiring the position of the feature identifier in the image information and a three-dimensional model corresponding to the object; and calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
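A per-frame sketch that wires these four steps together is given below; it reuses the find_feature_marker and calibrate_model sketches above, and the camera source, depth value, and intrinsics shown in the usage comment are assumed for illustration only.

```python
# Illustrative sketch only: the four method steps executed per camera frame,
# reusing the find_feature_marker / calibrate_model sketches above.
import cv2

def process_frame(capture, model_vertices, depth, intrinsics):
    ok, frame = capture.read()                 # step 1: acquire image information
    if not ok:
        return None
    marker_uv = find_feature_marker(frame)     # step 2: identify the feature identifier
    if marker_uv is None:
        return None                            # feature identifier not visible
    # step 3: the marker position and the subject's three-dimensional model are available
    return calibrate_model(model_vertices, marker_uv, depth, intrinsics)  # step 4: calibrate

# usage (assumed camera index 0 and illustrative intrinsics):
# cap = cv2.VideoCapture(0)
# calibrated = process_frame(cap, vertices, depth=2.0, intrinsics=(800.0, 800.0, 640.0, 360.0))
```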
Optionally, the processor may further execute the program code of the following steps: identifying the object in the image information according to the position of the feature identifier in the image information; and rendering the three-dimensional model based on the identified object, with the feature identifier taken as the origin of the world coordinate system.
Optionally, the feature identifier is a color feature point of the surface of the wearable device.
Optionally, the difference between the gray value of the color feature point and the gray value of the background color of the background where the wearable device is located is within a preset gray difference range.
Optionally, the processor may further execute the program code of the following steps: the wearable device comprises an acceleration sensor, and acceleration data detected by the acceleration sensor is acquired in the process of rendering the three-dimensional model; comparing the acceleration data with a preset acceleration threshold; and if the acceleration data is larger than the acceleration threshold value, reducing the rendering precision in the rendering process.
Optionally, the processor may further execute the program code of the following steps: rendering the three-dimensional model as a point cloud, where the scattering degree of the point cloud is directly proportional to the acceleration data.
Optionally, the processor may further execute the program code of the following steps: acquiring physiological parameters of a subject in the process of rendering the three-dimensional model; and changing the parameter information of the three-dimensional model according to the physiological parameters.
Optionally, the processor may further execute the program code of the following steps: the wearable device further includes a heart rate sensor; acquiring heart rate data of the subject detected by the heart rate sensor; and adjusting the color information of the three-dimensional model according to the heart rate data.
Optionally, the processor may further execute the program code of the following steps: the wearable device further includes a galvanic skin sensor; acquiring the sweat rate of the subject detected by the galvanic skin sensor; and adjusting the transparency information of the three-dimensional model according to the sweat rate.
Optionally, the processor may further execute the program code of the following steps: creating a three-dimensional model of the object, wherein creating the three-dimensional model of the object comprises: acquiring appearance parameters of the object by three-dimensionally scanning the object, and creating the three-dimensional model of the object based on the appearance parameters; or creating a general model according to preset appearance parameters, and correcting the general model according to at least one appearance parameter of the object to obtain the three-dimensional model of the object.
The embodiment of the present invention provides an image processing method in which a feature identifier is arranged on the wearable device and the three-dimensional model is calibrated according to the position of the feature identifier in the image information during rendering. The wearing object therefore does not need to be matched with the three-dimensional model by locating parameters of the wearing object in the image, such as its skeleton or contour; the matching can be completed directly through the feature identifier. This improves matching efficiency, shortens the computation time, reduces the delay in displaying the three-dimensional model, and weakens the drift caused by excessive delay during rendering, thereby solving the technical problem in the prior art that a digital twin motion model is prone to drift during rendering, resulting in a poor display effect.
It can be understood by those skilled in the art that the structure shown in fig. 9 is only illustrative, and the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 9 does not limit the structure of the above electronic device; for example, the computer terminal 90 may also include more or fewer components (e.g., a network interface or a display device) than shown in fig. 9, or have a configuration different from that shown in fig. 9.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 7
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store a program code executed by the image processing method provided in the first embodiment.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring image information in a scene, wherein the image information at least comprises wearing equipment and an object carrying the wearing equipment, and the wearing equipment is provided with a feature identifier; identifying a feature identifier on the wearable device; acquiring the position of the characteristic mark in the image information and a three-dimensional model corresponding to the object; and in the process of rendering the three-dimensional model, calibrating the three-dimensional model according to the position of the characteristic identifier in the image information.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (15)

1. A method of processing an image, comprising:
acquiring image information in a scene, wherein the image information at least comprises wearing equipment and an object carrying the wearing equipment, and the wearing equipment is provided with a feature identifier;
identifying a feature identification on the wearable device;
acquiring the position of the feature identifier in the image information and a three-dimensional model corresponding to the object;
and calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
2. The method of claim 1, wherein calibrating the three-dimensional model according to the position of the feature identifier in the image information during rendering the three-dimensional model comprises:
identifying an object in the image information according to the position of the feature identifier in the image information;
and rendering the three-dimensional model based on the identified object by taking the feature identifier as an origin in a world coordinate system.
3. The method of claim 1, wherein the feature identifiers are color feature points of the surface of the wearable device.
4. The method of claim 3, wherein the difference between the gray value of the color feature point and the gray value of the background color of the background where the wearable device is located is within a preset gray difference range.
5. The method of claim 1, wherein the wearable device comprises an acceleration sensor, and wherein during rendering of the three-dimensional model, the method further comprises:
acquiring acceleration data detected by the acceleration sensor;
comparing the acceleration data with a preset acceleration threshold;
and if the acceleration data is larger than the acceleration threshold value, reducing the rendering precision in the rendering process.
6. The method of claim 5, wherein if the acceleration data is greater than the acceleration threshold, reducing rendering accuracy during rendering comprises:
rendering the three-dimensional model to be dotted, wherein the acceleration data and the dotted point cloud scattering degree are in a direct proportion relation.
7. The method of claim 1, wherein during rendering of the three-dimensional model, the method further comprises:
acquiring a physiological parameter of the subject;
and changing the parameter information of the three-dimensional model according to the physiological parameters.
8. The method of claim 7, wherein the wearable device further comprises: a heart rate sensor that modifies parameter information of the three-dimensional model based on the physiological parameter, comprising:
acquiring heart rate data of the subject detected by the heart rate sensor;
and adjusting the color information of the three-dimensional model according to the heart rate data.
9. The method of claim 7, wherein the wearable device further comprises: a galvanic sensor that modifies parameter information of the three-dimensional model according to the physiological parameter, comprising:
acquiring the sweat rate of the subject detected by the galvanic skin sensor;
and adjusting the transparency information of the three-dimensional model according to the sweat rate.
10. The method of claim 1, wherein prior to acquiring image information in a scene, the method further comprises: creating a three-dimensional model of the object, wherein creating the three-dimensional model of the object comprises:
acquiring the shape parameters of the object by three-dimensional scanning of the object, and creating a three-dimensional model of the object based on the shape parameters; or
creating a general model according to preset appearance parameters, and correcting the general model according to at least one appearance parameter of the object to obtain a three-dimensional model of the object.
11. A method of processing an image, comprising:
sending a model display request to a rendering processor, wherein the rendering processor acquires image information in a scene, the image information at least comprises wearable equipment and an object carrying the wearable equipment, identifies a feature identifier on the wearable equipment, acquires the position of the feature identifier in the image information, and calibrates a three-dimensional model corresponding to the object according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model;
and receiving and displaying a rendering result of the rendering processor for rendering the three-dimensional model.
12. A system for processing an image, comprising:
the wearable device is provided with a characteristic mark;
the image acquisition device is used for acquiring image information in a scene and sending the image information to the rendering processor, wherein the image information at least comprises the wearable equipment and an object carrying the wearable equipment;
the rendering processor is used for identifying the feature identifier on the wearable device, acquiring the position of the feature identifier in the image information, and calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model corresponding to the object;
and the display equipment is used for displaying a rendering result of rendering the three-dimensional model.
13. An apparatus for processing an image, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring image information in a scene, the image information at least comprises wearing equipment and an object carrying the wearing equipment, and the wearing equipment is provided with a feature identifier;
the identification module is used for identifying the characteristic mark on the wearable device;
the second acquisition module is used for acquiring the position of the feature identifier in the image information and the three-dimensional model corresponding to the object;
and the calibration module is used for calibrating the three-dimensional model according to the position of the feature identifier in the image information in the process of rendering the three-dimensional model.
14. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the image processing method according to any one of claims 1 to 10.
15. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the method for processing the image according to any one of claims 1 to 10 when running.
CN202010219156.4A 2020-03-25 2020-03-25 Image processing method, device and system Pending CN113450448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010219156.4A CN113450448A (en) 2020-03-25 2020-03-25 Image processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010219156.4A CN113450448A (en) 2020-03-25 2020-03-25 Image processing method, device and system

Publications (1)

Publication Number Publication Date
CN113450448A true CN113450448A (en) 2021-09-28

Family

ID=77806882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010219156.4A Pending CN113450448A (en) 2020-03-25 2020-03-25 Image processing method, device and system

Country Status (1)

Country Link
CN (1) CN113450448A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022177505A1 (en) * 2021-02-17 2022-08-25 National University Of Singapore Methods relating to virtual reality systems and interactive objects
CN114326492A (en) * 2021-12-20 2022-04-12 中国科学院上海高等研究院 Digital twin virtual-real linkage system of process industrial equipment
CN114326492B (en) * 2021-12-20 2023-09-01 中国科学院上海高等研究院 Digital twin virtual-real linkage system of process industrial equipment

Similar Documents

Publication Publication Date Title
Yuan et al. A mixed reality virtual clothes try-on system
RU2668408C2 (en) Devices, systems and methods of virtualising mirror
CN110363867B (en) Virtual decorating system, method, device and medium
US20200066052A1 (en) System and method of superimposing a three-dimensional (3d) virtual garment on to a real-time video of a user
CN107390863B (en) Device control method and device, electronic device and storage medium
CN108447043B (en) Image synthesis method, equipment and computer readable medium
US20170357397A1 (en) Virtual object display device, method, program, and system
CN111754415B (en) Face image processing method and device, image equipment and storage medium
CN109815776B (en) Action prompting method and device, storage medium and electronic device
KR20190032084A (en) Apparatus and method for providing mixed reality content
CN204576413U (en) A kind of internet intelligent mirror based on natural user interface
CN106355479A (en) Virtual fitting method, virtual fitting glasses and virtual fitting system
CN108876886B (en) Image processing method and device and computer equipment
CN102156808A (en) System and method for improving try-on effect of reality real-time virtual ornament
US11842437B2 (en) Marker-less augmented reality system for mammoplasty pre-visualization
CN103413229A (en) Method and device for showing baldric try-on effect
CN113467619B (en) Picture display method and device, storage medium and electronic equipment
CN114821675B (en) Object processing method and system and processor
CN104750933A (en) Eyeglass trying on method and system based on Internet
JP2015219892A (en) Visual line analysis system and visual line analysis device
CN113450448A (en) Image processing method, device and system
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111291746A (en) Image processing system and image processing method
CN114187651A (en) Taijiquan training method and system based on mixed reality, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination