CN112529770B - Image processing method, device, electronic equipment and readable storage medium


Info

Publication number
CN112529770B
CN112529770B (granted publication of application CN202011414651.7A)
Authority
CN
China
Prior art keywords
input
dimensional
image
dimensional model
size
Prior art date
Legal status
Active
Application number
CN202011414651.7A
Other languages
Chinese (zh)
Other versions
CN112529770A (en)
Inventor
秦美洋
朱丽君
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011414651.7A priority Critical patent/CN112529770B/en
Publication of CN112529770A publication Critical patent/CN112529770A/en
Application granted granted Critical
Publication of CN112529770B publication Critical patent/CN112529770B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery


Abstract

The embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device and a readable storage medium, belonging to the technical field of image processing. The image processing method comprises the following steps: acquiring first depth information of a target image; projecting a three-dimensional model of the target image in the space where the electronic device is located according to the first depth information; receiving a first input at a target position of the three-dimensional model; and, in response to the first input, adjusting a three-dimensional size at the target position of the three-dimensional model according to input parameters of the first input. In this way, the size of an object in the image can be accurately identified from the image's depth information, and a three-dimensional model can be projected in space; the user can then modify the value of the model's three-dimensional size by operating on its target position. This realizes three-dimensional editing, makes people and objects in the image more vivid and three-dimensional, yields a refined and polished image, and effectively improves the user's satisfaction with the processed image.

Description

Image processing method, device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
Background
In the related art, a depth image can be edited only in the image plane; the image cannot be edited according to an object's distance or a person's size, so the edited image ends up with inconsistent proportions or insufficient fineness. Because editing cannot go beyond the plane, the overall stereoscopic impression and refinement of the image suffer.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a readable storage medium, which can accurately identify depth information in an image, project a three-dimensional model in space, change two-dimensional editing into three-dimensional editing through the three-dimensional model, and enable people or static objects in the image to be more three-dimensional and exquisite.
In order to solve the above problems, the present application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring first depth information of a target image;
projecting a three-dimensional model of a target image in a space where the electronic equipment is located according to the first depth information;
receiving a first input at a target position of the three-dimensional model;
in response to the first input, adjusting a three-dimensional size at the target position of the three-dimensional model according to input parameters of the first input.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring first depth information of the target image;
the projection module is used for projecting a three-dimensional model of the target image in the space where the electronic equipment is located according to the first depth information;
the receiving module is used for receiving a first input of the target position of the three-dimensional model;
and the processing module is used for responding to the first input and adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method as provided in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions for implementing the steps of the image processing method as provided in the first aspect.
In the embodiments of the present application, first depth information of a target image is acquired; a three-dimensional model of the target image is projected in the space where the electronic device is located according to the first depth information; a first input at a target position of the three-dimensional model is received; and, in response to the first input, a three-dimensional size at the target position is adjusted according to input parameters of the first input. In this way, the size of an object in the image can be accurately identified from the image's depth information, and a three-dimensional model can be projected in space; the user can then modify the model's three-dimensional size by operating on its target position. This realizes three-dimensional editing, makes people and objects in the image more vivid and three-dimensional, yields a refined and polished image, and effectively improves the user's satisfaction with the processed image.
Drawings
FIG. 1 illustrates one of the flowcharts of an image processing method according to one embodiment of the present application;
FIG. 2 illustrates a second flowchart of an image processing method according to one embodiment of the present application;
FIG. 3 illustrates a third flowchart of an image processing method according to one embodiment of the present application;
FIG. 4 illustrates a fourth flow chart of an image processing method according to one embodiment of the present application;
FIG. 5 illustrates a fifth flow chart of an image processing method according to one embodiment of the present application;
FIG. 6 illustrates a sixth flowchart of an image processing method according to one embodiment of the present application;
FIG. 7 illustrates a seventh flow chart of an image processing method according to one embodiment of the present application;
FIG. 8 shows an eighth flowchart of an image processing method according to one embodiment of the present application;
FIG. 9 shows a Gaussian curve schematic of a depth image according to an embodiment of the application;
FIG. 10 shows one of the block diagrams of the image processing apparatus according to one embodiment of the present application;
FIG. 11 shows a second block diagram of an image processing apparatus according to one embodiment of the present application;
FIG. 12 shows a third block diagram of an image processing apparatus according to one embodiment of the present application;
FIG. 13 illustrates a block diagram of an electronic device according to one embodiment of the present application;
FIG. 14 shows a hardware configuration block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and thus the scope of the present application is not limited by the specific embodiments disclosed below.
An image processing method, an image processing apparatus, an electronic device, and a readable storage medium according to some embodiments of the present application are described below with reference to fig. 1 to 14.
In one embodiment of the present application, FIG. 1 shows one of the flowcharts of an image processing method of the embodiment of the present application, including:
Step 102, acquiring first depth information of a target image;
For example, the mobile phone enters the album editing interface, the user clicks an image in the album to select the target image, and the first depth information recorded when the image was shot is invoked.
Step 104, a three-dimensional model of a target image is projected in the space where the electronic equipment is located according to the first depth information;
In this embodiment, the first depth information of each pixel of the target image is read, and the three-dimensional size of the object or person in the target image, that is, the pixel coordinates (X, Y and Z axis coordinates), is obtained from the first depth information. A three-dimensional model corresponding to the target image is then projected by a plurality of projection devices in different directions into the space defined by those pixel coordinates, so that the user can view the three-dimensional shape of the object or person through the model, which makes it convenient to select the target position to be edited and modified.
It will be appreciated that after the three-dimensional model is projected in space, the three-dimensional model may also be synchronously displayed on a screen of the electronic device, as shown in fig. 9, with depth information of the target image reflected by a gaussian curve.
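The conversion from per-pixel depth to three-dimensional pixel coordinates described above can be sketched with a standard pinhole back-projection. This is a minimal illustration; the intrinsic parameters fx, fy, cx, cy and the function name are assumptions, not taken from the patent.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into camera-space 3D points
    (X, Y, Z) using a pinhole model with assumed intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)
```

Each point of the resulting cloud carries the X, Y and Z coordinates that the projection step needs.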
Step 106, receiving a first input of a target position of the three-dimensional model;
the image processing method is suitable for electronic equipment, wherein the electronic equipment comprises, but is not limited to, a mobile terminal, a tablet computer, a notebook computer, a wearable device, a vehicle-mounted terminal and the like. The first input may be an operation of the electronic device by a user, or an operation of the stereoscopic three-dimensional model by a user identified by the electronic device. Wherein the first input includes, but is not limited to, a click input, a key input, a fingerprint input, a swipe input, a press input. Key inputs include, but are not limited to, a power key, a volume key, a single click input of a home menu key, a double click input, a long press input, a combination key input, and the like, to an electronic device. The manner of operation of the embodiments of the present application is not particularly limited, and may be any manner that can be implemented.
It should be noted that an array of photosensitive elements is arranged in the space where the electronic device is located. The array collects luminance data at different positions of the three-dimensional model; when the user's operation occludes the model's projection beams, the position of the operation can be determined from the luminance data collected by the photosensitive elements, thereby identifying the target position on the three-dimensional model.
For example, the user places a finger at the position of the three-dimensional model to be edited; the photosensitive elements sense the projection of the finger, that is, some of the pixels in the three-dimensional model, so that a certain region of the image can be modified locally. This realizes three-dimensional editing, lets the user edit any position in the image, meets the user's need for local retouching, and greatly improves retouching accuracy.
Step 108, in response to the first input, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input.
In this embodiment, the three-dimensional size of the target position of the three-dimensional model is replaced or modified according to the input parameters of the first input at that position, that is, the user's correction value for the image, and the modified image is stored. Two-dimensional image editing thus becomes three-dimensional editing, so the electronic device can perform three-dimensional editing operations; people and objects in the image become more vivid and three-dimensional, a more refined image is obtained, and the user's satisfaction with the processed image is effectively improved.
It is worth mentioning that after the three-dimensional size of the target position of the three-dimensional model is adjusted according to the input parameters, the projected three-dimensional model changes accordingly, yielding a modified model so that the user can view the modification effect on the target image in time.
In one embodiment of the present application, FIG. 2 shows a second flowchart of the image processing method of the embodiment of the present application. In step 108, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input includes:
step 202, identifying a motion start point and a motion end point of a first input;
step 204, determining the displacement between the motion start point and the motion end point;
In this embodiment, the first input may be a sliding input on the three-dimensional model; the motion start point and the motion end point of the sliding input are recognized, and the displacement between them is calculated. The displacement includes a direction and a distance.
Step 206, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation under the condition that the displacement belongs to the preset displacement interval;
Step 208, adjusting the three-dimensional size according to the size change amount.
In this embodiment, a correspondence between preset displacement intervals and size changes is configured in advance; that is, different displacement intervals indicate different size changes. The displacement between the motion start point and the motion end point is compared with the preset displacement intervals, and when the displacement falls within one of them, the size change corresponding to that interval is taken as the target-image correction value specified by the first input. The three-dimensional size can therefore be modified in real time according to the size change, so the user can adjust it dynamically by sliding on the three-dimensional model. This realizes three-dimensional scaling of the image, lets the user perceive the model's changes during editing, prevents excessive modification from spoiling the image, reduces the difficulty of retouching, and effectively improves the overall or local stereoscopic impression and refinement of the image.
Specifically, taking a portrait image as an example, suppose the user wants to retouch a flat nose and flattened hair. The user places a finger on the nose or head of the three-dimensional model to determine the target position, then lifts the nose or raises the flattened hair with a stretching operation, or narrows the wings of the nose with a shortening operation, and so on. During the stretching or shortening operation, the target position in the image changes as the sliding input changes.
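The correspondence between preset displacement intervals and size changes (steps 206 to 208) might look like the following sketch; the interval bounds and size deltas are invented for illustration, not taken from the patent.

```python
def size_change_for_displacement(displacement, intervals):
    """Look up the size change whose preset interval [low, high)
    contains the measured displacement distance."""
    for (low, high), delta in intervals:
        if low <= displacement < high:
            return delta
    return 0.0  # outside every preset interval: no change

# Illustrative intervals (in millimetres) and size deltas
INTERVALS = [((0.0, 5.0), 0.5), ((5.0, 15.0), 2.0), ((15.0, 50.0), 5.0)]
```

A larger slide thus maps to a larger correction value, matching the dynamic adjustment described above.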
In one embodiment of the present application, fig. 3 shows a third flowchart of the image processing method of the embodiment of the present application, step 202, identifying a motion start point and a motion end point of the first input, including:
Step 302, capturing the projection of the first input on the three-dimensional model;
Step 304, generating the motion trajectory of the first input according to the projection;
Step 306, determining the motion start point and the motion end point according to the motion trajectory.
In this embodiment, the photosensitive elements capture the projected pixel positions of the user's first input at the target position on the three-dimensional model, and these projected pixel positions are connected to generate the motion trajectory of the first input. The motion start point and motion end point of the first input can be identified from the trajectory, which makes it convenient to determine the displacement between the two points, accurately identify the size change the user requires, and modify the three-dimensional size of the model by that size change, realizing three-dimensional editing and improving the overall or local stereoscopic impression and refinement of the image.
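Steps 202 to 204 together with steps 302 to 306 can be sketched as follows; the data layout (an ordered list of projected pixel positions) and the function names are assumptions for illustration.

```python
import math

def trajectory_endpoints(projections):
    """The ordered projected pixel positions form the motion trajectory;
    its first and last samples are the motion start and end points."""
    return projections[0], projections[-1]

def displacement(start, end):
    """Displacement between start and end: a direction vector and a distance."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    return (dx, dy), math.hypot(dx, dy)
```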
In one embodiment of the present application, FIG. 4 shows a fourth flowchart of an image processing method of an embodiment of the present application, step 302, capturing a projection of a first input on a three-dimensional model, comprising:
Step 402, collecting brightness data of the three-dimensional model;
Step 404, determining the projection according to the positions whose brightness data is less than or equal to a preset threshold.
In this embodiment, the photosensitive element array collects brightness data at different positions of the three-dimensional model. When the brightness data at a position is less than or equal to the preset threshold, that position is occluded and may be where the user is performing a sliding operation, so it is recorded as a projection position. The motion trajectory of the user's input can then be determined from the set of projections, the size change the user requires can be accurately identified, and the three-dimensional size of the model can be modified by that size change, realizing three-dimensional editing.
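The brightness test of steps 402 to 404 can be illustrated as follows; the threshold and the sample data are assumptions for the sketch.

```python
import numpy as np

def occluded_positions(brightness, threshold):
    """Positions whose collected luminance is at or below the preset
    threshold, i.e. candidate projection pixels blocked by the user."""
    rows, cols = np.nonzero(brightness <= threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```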
In one embodiment of the present application, fig. 5 shows a fifth flowchart of an image processing method of an embodiment of the present application, and step 108, adjusting the three-dimensional size of the three-dimensional model target position according to the input parameter of the first input, includes:
Step 502, displaying the numerical value of the three-dimensional size and the size threshold;
In this embodiment, after the three-dimensional size of the three-dimensional model is identified, the value of the three-dimensional size and the corresponding size threshold are displayed on the electronic device so that the user knows the current size parameters of the object or person in the target image and the range within which they can be modified. The user can therefore retouch the image reasonably according to the displayed value and threshold, which improves retouching quality and reduces retouching difficulty.
It should be noted that the size threshold may be the maximum and minimum values of the three-dimensional size of the three-dimensional model, or a proportionally adjustable range in the image that is reasonably set according to requirements. Taking retouching as an example, for a face-slimming requirement the size threshold is the pixel's three-dimensional size plus or minus a preset value. Displaying the size threshold thus prompts the user with a reasonable retouching range, prevents excessive modification from spoiling the image, and reduces retouching difficulty.
Step 504, adjusting the three-dimensional size according to the target three-dimensional size value corresponding to the first input.
Wherein the first input is for inputting a target three-dimensional size value.
In this embodiment, the first input may be a key input to the electronic device, through which the specific target three-dimensional size value entered by the user, that is, distance information along the X, Y and Z axes of the three-dimensional model's coordinate system, is obtained. The three-dimensional size value at the target position of the three-dimensional model can then be replaced by the target value, realizing the electronic device's three-dimensional editing of the target image and improving the overall or local stereoscopic impression and refinement of the target image.
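Replacing a coordinate with the user-entered target value, clamped to the displayed size threshold, might be sketched as follows; the bounds are illustrative, not from the patent.

```python
def apply_target_size(target, min_size, max_size):
    """Clamp the user-entered target three-dimensional size value to the
    displayed size threshold range before replacing the coordinate."""
    return max(min_size, min(max_size, target))
```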
In one embodiment of the present application, fig. 6 shows a sixth flowchart of an image processing method of the embodiment of the present application, including:
Step 602, receiving a second input to the three-dimensional model;
In this embodiment, the second input may be an operation performed by the user on the electronic device, or an operation on the projected three-dimensional model recognized by the electronic device. The second input includes, but is not limited to, a click input, key input, fingerprint input, swipe input, or press input. Key inputs include, but are not limited to, a single click, double click, long press, or combined press of the power key, volume key, or home menu key of the electronic device. The embodiments of the present application do not specifically limit the manner of operation, which may be any implementable manner.
Step 604, in response to the second input, projecting the three-dimensional model at a rotation angle corresponding to the second input.
In this embodiment, after the three-dimensional model of the target image is projected in the space where the electronic device is located according to the first depth information, the user can control the rotation of the three-dimensional model through the second input to the model. The user can therefore view the three-dimensional model from all directions, which helps in selecting the target position to be edited and realizes three-dimensional editing, making people and objects in the image more vivid and three-dimensional, yielding a refined image, and effectively improving the user's satisfaction with the processed image.
In particular, the second input may be a key input to the electronic device, where the key input indicates a specific value of the rotation angle, and the three-dimensional model is projected at that angle to realize its rotation. Alternatively, a rotation-angle control may be provided on the screen of the electronic device, and the user adjusts the projection angle of the three-dimensional model by clicking the control. In addition, the second input may be a sliding input on the three-dimensional model: the motion trajectory of the second input is identified by the photosensitive element array, the corresponding rotation angle is matched from the trajectory, and the three-dimensional model is projected at that rotation angle.
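One possible sketch of projecting the model at the rotation angle: the patent does not fix a rotation axis, so rotation about the vertical (Y) axis is assumed here for illustration.

```python
import math

def rotate_y(point, angle_deg):
    """Rotate a model point (x, y, z) about the Y axis by angle_deg
    before re-projection; the axis choice is an assumption."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))
```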
In one embodiment of the present application, fig. 7 shows a seventh flowchart of an image processing method of an embodiment of the present application, and step 102, before obtaining the first depth information of the target image, further includes:
Step 702, displaying at least one depth image;
Step 704, receiving a third input to the at least one depth image;
in this embodiment, the third input of the user to the at least one depth image may be an input of a finger of the user on the depth image, or an input of a touch device such as a stylus on the depth image.
Step 706, in response to the third input, determining a target image from the at least one depth image.
In this embodiment, at least one depth image is displayed on a screen of the electronic device, and the user can select a target image to be modified through a third input to the at least one depth image.
It will be appreciated that the electronic device predefines a response function triggered by the third input, the response function indicating at least one rule for triggering selection of the target image. When the electronic device receives a third input from the user, the third input is matched against the rules for selecting the target image, and when it satisfies a rule, the operation of determining the target image from the at least one depth image is triggered in response to the third input. For example, if a rule is defined as double-clicking a depth image, then when the user double-clicks a depth image, that depth image is taken as the target image. The rule may also be clicking the depth image and a confirm control, long-pressing the depth image for a specified time, and so on; the embodiments of the present application do not specifically limit this.
Specifically, taking selecting a picture from an album as an example, an album interface, that is, a thumbnail display interface for at least one depth picture, is displayed on the electronic device. The user can select a depth picture by clicking its thumbnail; after selection, a selected mark "√" is displayed on the thumbnail. In addition, by tapping a thumbnail the user can enter a large-image browsing mode for the thumbnails, so as to view the picture clearly.
In one embodiment of the present application, fig. 8 shows an eighth flowchart of an image processing method according to an embodiment of the present application, and before displaying at least one depth image, step 702 further includes:
Step 802, receiving a fourth input to the electronic device;
Step 804, in response to the fourth input, turning on a depth camera of the electronic device;
the depth camera comprises a structured light camera and a general camera.
Step 806, collecting structured light encoding information through the depth camera of the electronic device;
in this embodiment, when the electronic device receives the fourth input, the depth camera is turned on to take a photograph of the depth camera. The depth camera comprises a structured light camera and a general camera. The structured light camera may include a structured light projector and a structured light sensor. The structured light camera can adopt a structured light projector to project light spots, light slits, gratings, grids or speckles to the object to be measured, namely, the structured light can also be generated by adopting coherent light, stacked grating light, diffraction light and the like. The structured light sensor is then used to collect structured light encoding information of the object under test, for example, the encoded pattern after being modulated by the surface of the object under test.
Specifically, the structured light may be infrared light (Infrared Radiation, IR).
The projector includes, for example, a flash or a continuous light source.
Step 808, determining second depth information according to the structured light encoding information;
In this embodiment, because a light beam with a given structure falls on regions of the object at different depths, the acquired image changes relative to the original beam structure; a ranging operation on the structured light encoding information then converts this structural change into the second depth information.
For example, the structured light may be encoded spatially, such as with De Bruijn sequence coding, or temporally, such as with binary coding or Gray coding. The spatial coding scheme may project only a single piece of preset structured light encoding information, e.g., a single-frame structured light pattern, while the temporal coding scheme may project a plurality of different pieces of preset structured light encoding information, e.g., multiple frames of different structured light patterns.
Specifically, for the spatial coding mode, the collected structured light coding information is decoded and compared with the preset structured light coding information to obtain the matching relationship between the two, and the second depth information is calculated by combining the triangulation ranging principle. For the time coding mode, the structured light sensor collects a plurality of pieces of structured light coding information modulated by the surface of the object under test, and the second depth information is obtained by decoding the collected coding information and likewise calculating with the triangulation ranging principle.
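For a rectified projector–sensor pair, the triangulation ranging principle referred to above reduces to Z = f·B/d, where B is the baseline between projector and sensor, f the focal length in pixels, and d the disparity between the observed code position and its reference position. A minimal sketch (the parameter values in the usage note are illustrative assumptions, not calibration data from the embodiment):

```python
def depth_from_disparity(baseline_mm, focal_px, disparity_px):
    """Triangulation for a rectified projector-sensor pair: Z = f * B / d.

    baseline_mm  -- distance B between projector and sensor, in millimetres
    focal_px     -- focal length f expressed in pixels
    disparity_px -- offset d between observed and reference code position
    Returns the depth Z in millimetres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

For example, with an assumed 75 mm baseline, a 500 px focal length, and a 25 px disparity, the point lies 1500 mm from the camera; larger disparities correspond to nearer points.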
And step 810, performing three-dimensional reconstruction by using the second depth information to obtain at least one depth image.
In this embodiment, after the second depth information is obtained, the three-dimensional size (X, Y, Z axis coordinates) of each pixel is generated from the second depth information of each pixel, and then three-dimensional reconstruction is performed from the three-dimensional size, so that a depth image can be obtained.
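The generation of per-pixel (X, Y, Z) coordinates described above can be sketched with the standard pinhole back-projection. The intrinsic parameters fx, fy, cx, cy are assumed to be known from calibration; the numbers used below are illustrative assumptions only:

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z into a camera-space point
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)


def depth_map_to_points(depth_map, fx, fy, cx, cy):
    """Convert a dense depth map (list of rows of Z values) into a 3-D
    point cloud, skipping pixels without a valid depth measurement."""
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if z > 0:  # zero marks pixels where no depth was recovered
                points.append(pixel_to_point(u, v, z, fx, fy, cx, cy))
    return points
```

A pixel at the principal point maps onto the optical axis (X = Y = 0); the resulting point set is what a surface-reconstruction step would mesh into the depth image.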
In one embodiment of the present application, as shown in fig. 10, an image processing apparatus 900 includes: the acquisition module 902, the acquisition module 902 is configured to acquire first depth information of a target image; the projection module 904 is configured to project a three-dimensional model of the target image in a space where the electronic device is located according to the first depth information; a receiving module 906, the receiving module 906 being configured to receive a first input of a target position of the three-dimensional model; the processing module 908 is configured to, in response to the first input, adjust a three-dimensional size of the three-dimensional model target location according to an input parameter of the first input.
In this embodiment, the depth information of the image allows the size of an object in the image to be identified accurately, and a three-dimensional model is projected in space. By operating on the target position of the three-dimensional model, the user can modify the numerical value of the model's three-dimensional size, realizing three-dimensional editing so that people or objects in the image appear more vivid and three-dimensional. A refined, polished image is thus obtained, and the user's satisfaction with the processed image is effectively improved.
Optionally, as shown in fig. 11, the image processing apparatus 900 further includes: the recognition module 910, the recognition module 910 is configured to recognize a motion start point and a motion end point of the first input; a determination module 912, the determination module 912 configured to determine a displacement between the motion start point and the motion end point; under the condition that the displacement belongs to a preset displacement interval, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation; the processing module 908 is further configured to adjust the three-dimensional size according to the size variation.
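The correspondence between preset displacement intervals and size variations can be sketched as a simple lookup table; the interval bounds and percentages below are hypothetical values chosen only to illustrate the mechanism:

```python
# Hypothetical mapping from drag displacement (distance between the motion
# start point and motion end point) to a relative size variation.
DISPLACEMENT_INTERVALS = [
    ((0.5, 2.0), 0.05),   # small drag  -> 5% of the local dimension
    ((2.0, 5.0), 0.15),   # medium drag -> 15%
    ((5.0, 10.0), 0.30),  # large drag  -> 30%
]


def size_change_for_displacement(displacement):
    """Return the size variation for a displacement, or None when the
    displacement falls outside every preset interval."""
    for (low, high), change in DISPLACEMENT_INTERVALS:
        if low <= displacement < high:
            return change
    return None


def adjust_dimension(current, displacement, shrink=False):
    """Apply the looked-up variation to one axis of the model's 3-D size."""
    change = size_change_for_displacement(abs(displacement))
    if change is None:
        return current  # out-of-range input: leave the size untouched
    factor = 1 - change if shrink else 1 + change
    return current * factor
```

A displacement outside every interval leaves the model unchanged, matching the condition that adjustment only occurs when the displacement belongs to a preset displacement interval.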
Optionally, the identification module 910 is specifically configured to: capturing a projection of a first input on the three-dimensional model; generating a first input motion trail according to the projection; and determining a motion starting point and a motion ending point according to the motion trail.
Optionally, the identification module 910 is specifically configured to: collecting brightness data of the three-dimensional model; and determining projection according to the position corresponding to the brightness data smaller than or equal to the preset threshold value.
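The brightness-based projection capture described above amounts to thresholding the collected luminance of the model surface: where the user's finger sits between the projector and the projected model, the surface brightness drops, so low-luminance positions trace the projection of the input. A minimal sketch (the threshold value and luminance grid are assumptions):

```python
def find_shadowed_pixels(luminance, threshold):
    """Locate positions whose brightness is at or below the threshold.

    luminance -- 2-D grid (list of rows) of brightness samples taken from
                 the projected three-dimensional model's surface
    threshold -- preset brightness cutoff; values <= threshold are treated
                 as shadowed by the input object
    Returns a list of (x, y) positions forming the input's projection.
    """
    hits = []
    for y, row in enumerate(luminance):
        for x, value in enumerate(row):
            if value <= threshold:
                hits.append((x, y))
    return hits
```

Collecting these positions frame by frame yields the motion trail from which the motion start point and motion end point are then determined.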
Optionally, as shown in fig. 12, the image processing apparatus 900 further includes: the display module 916, the display module 916 is configured to display the numerical value of the three-dimensional size and the size threshold; the processing module 908 is further configured to adjust the three-dimensional size according to the target three-dimensional size value corresponding to the first input; wherein the first input is for inputting a target three-dimensional size value.
Optionally, the receiving module 906 is further configured to receive a second input to the three-dimensional model; the projection module 904 is further configured to project the three-dimensional model at a rotation angle corresponding to the second input in response to the second input.
Optionally, the display module 916 is further configured to display at least one depth image; the receiving module 906 is further configured to receive a third input to the at least one depth image; the acquisition module 902 is further operable to determine a target image from the at least one depth image in response to the third input.
Optionally, the receiving module 906 is further configured to receive a fourth input to the electronic device; the image processing apparatus 900 further includes: a start module (not shown) for turning on the depth camera of the electronic device in response to the fourth input; the acquisition module (not shown in the figure) is used for acquiring the structured light coding information of the depth camera; the obtaining module 902 is further configured to determine second depth information according to the structured light encoding information; and adopting the second depth information to perform three-dimensional reconstruction to obtain at least one depth image. The depth camera comprises a structured light camera and a general camera.
In this embodiment, the steps of the image processing method in any of the above embodiments are implemented when the respective modules of the image processing apparatus 900 perform the respective functions, so that the image processing apparatus also includes all the advantages of the image processing method in any of the above embodiments, which are not described herein.
The image processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
In one embodiment of the present application, as shown in fig. 13, there is provided an electronic device 1000 comprising: the processor 1004, the memory 1002, and a program or instructions stored in the memory 1002 and executable on the processor 1004, which when executed by the processor 1004, implement the steps of the image processing method as provided in any of the embodiments described above, and therefore, the electronic device 1000 includes all the advantages of the image processing method as provided in any of the embodiments described above, and will not be described herein.
Fig. 14 is a block diagram of a hardware structure of an electronic device 1200 implementing an embodiment of the present application. The electronic device 1200 includes, but is not limited to: radio frequency unit 1202, network module 1204, audio output unit 1206, input unit 1208, sensor 1210, display unit 1212, user input unit 1214, interface unit 1216, memory 1218, processor 1220, and the like.
Those skilled in the art will appreciate that the electronic device 1200 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 1220 via a power management system such that charge, discharge, and power consumption management functions are performed by the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than illustrated, may combine certain components, or may have a different arrangement of components. In the embodiment of the application, the electronic device includes, but is not limited to, a mobile terminal, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer and the like.
Wherein the processor 1220 is configured to obtain first depth information of the target image; the display unit 1212 is configured to project a three-dimensional model of the target image in a space where the electronic device is located according to the first depth information; the user input unit 1214 is configured to receive a first input of a target position of the three-dimensional model; processor 1220 is configured to adjust a three-dimensional size of a three-dimensional model target location in response to the first input according to an input parameter of the first input.
Processor 1220 is also configured to identify a motion start point and a motion end point of the first input; determining a displacement between a motion start point and a motion end point; under the condition that the displacement belongs to a preset displacement interval, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation; and adjusting the three-dimensional size according to the size variation.
Further, processor 1220 is also configured to capture a projection of the first input on the three-dimensional model; generating a first input motion trail according to the projection; and determining a motion starting point and a motion ending point according to the motion trail.
Further, the processor 1220 is further configured to collect luminance data of the three-dimensional model; and determining projection according to the position corresponding to the brightness data smaller than or equal to the preset threshold value.
Further, the display unit 1212 is further configured to display a numerical value of the three-dimensional size and a size threshold value; processor 1220 is further configured to adjust the three-dimensional size according to the target three-dimensional size value corresponding to the first input; wherein the first input is for inputting a target three-dimensional size value.
Further, the user input unit 1214 is further configured to receive a second input to the three-dimensional model; the display unit 1212 is further configured to, in response to the second input, project the three-dimensional model at a rotation angle corresponding to the second input.
Further, the display unit 1212 is further configured to display at least one depth image; the user input unit 1214 is further for receiving a third input of at least one depth image; processor 1220 is also configured to determine, in response to the third input, a target image from the at least one depth image.
Further, the user input unit 1214 is also for receiving a fourth input to the electronic device; processor 1220 is further configured to turn on a depth camera of the electronic device in response to the fourth input; collecting structured light coding information of a depth camera; determining second depth information according to the structured light coding information; and adopting the second depth information to perform three-dimensional reconstruction to obtain at least one depth image.
It should be understood that, in the embodiment of the present application, the radio frequency unit 1202 may be configured to receive and transmit information or signals during a call, and specifically, receive downlink data of a base station or send uplink data to the base station. The radio frequency unit 1202 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The network module 1204 provides wireless broadband internet access to users, such as helping users send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 1206 may convert audio data received by the radio frequency unit 1202 or the network module 1204 or stored in the memory 1218 into an audio signal and output as sound. Also, the audio output unit 1206 may also provide audio output related to a particular function performed by the electronic device 1200 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1206 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1208 is used to receive an audio or video signal. The input unit 1208 may include a graphics processor (Graphics Processing Unit, GPU) 5082 and a microphone 5084, the graphics processor 5082 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1212, stored in the memory 1218 (or other storage medium), or transmitted via the radio frequency unit 1202 or the network module 1204. The microphone 5084 may receive sound and process it into audio data; in a phone call mode, the audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1202.
The electronic device 1200 also includes at least one sensor 1210, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and other sensors.
The display unit 1212 is used to display information input by a user or information provided to the user. The display unit 1212 may include a display panel 5122, and the display panel 5122 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
The user input unit 1214 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. In particular, the user input unit 1214 includes a touch panel 5142 and other input devices 5144. The touch panel 5142, also referred to as a touch screen, can collect touch operations on or near it by a user. The touch panel 5142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 1220, and receives and executes commands sent from the processor 1220. Other input devices 5144 can include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5142 can be overlaid on the display panel 5122; when the touch panel 5142 detects a touch operation on or near it, the operation is transmitted to the processor 1220 to determine the type of touch event, and then the processor 1220 provides a corresponding visual output on the display panel 5122 according to the type of touch event. The touch panel 5142 and the display panel 5122 may be two independent components or may be integrated into one component.
The interface unit 1216 is an interface for connecting an external device to the electronic apparatus 1200. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1216 may be used to receive input (e.g., data information, power, etc.) from an external device and to transmit the received input to one or more elements within the electronic apparatus 1200 or may be used to transmit data between the electronic apparatus 1200 and an external device.
Memory 1218 may be used to store application programs as well as various data. The memory 1218 may include primarily a stored program area and a stored data area, wherein the stored program area may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebooks, etc.) created according to the use of the mobile terminal, etc. In addition, memory 1218 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
Processor 1220 performs various functions of the electronic device 1200 and processes data by running or executing application programs and/or modules stored in the memory 1218 and invoking data stored in the memory 1218, thereby performing overall monitoring of the electronic device 1200. Processor 1220 may include one or more processing units; processor 1220 may integrate an application processor, which primarily handles the operating system, user interface, applications, and the like, with a modem processor, which primarily handles wireless communication.
In one embodiment of the present application, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method as provided in any of the embodiments described above.
In this embodiment, the readable storage medium can implement each process of the image processing method provided in the embodiment of the present application, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
Wherein the processor is a processor in the communication device in the above embodiment. Readable storage media include computer readable storage media such as Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic or optical disks, and the like.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface, the communication interface being coupled with the processor. The processor is configured to run a program or instructions to implement each process of the embodiments of the image processing method above and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and variations may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (8)

1. An image processing method, comprising:
acquiring first depth information of a target image;
projecting a three-dimensional model of the target image in a space where the electronic equipment is located according to the first depth information;
receiving a first input of a target position of the three-dimensional model;
responding to the first input, and adjusting the three-dimensional size of the three-dimensional model target position according to the input parameters of the first input;
the adjusting the three-dimensional size of the three-dimensional model target position according to the input parameters of the first input includes:
identifying a motion start point and a motion end point of the first input;
determining a displacement between the motion start point and the motion end point;
under the condition that the displacement belongs to a preset displacement interval, determining a size variation corresponding to the displacement according to a corresponding relation between the preset displacement interval and the size variation;
adjusting the three-dimensional size according to the size variation;
or,
displaying the numerical value and the size threshold value of the three-dimensional size;
adjusting the three-dimensional size according to the target three-dimensional size value corresponding to the first input;
wherein the first input is for inputting the target three-dimensional size value.
2. The image processing method according to claim 1, wherein the identifying the motion start point and the motion end point of the first input includes:
capturing a projection of the first input on the three-dimensional model;
generating a motion trail of the first input according to the projection;
and determining the motion starting point and the motion ending point according to the motion trail.
3. The image processing method of claim 2, wherein said capturing a projection of said first input on said three-dimensional model comprises:
collecting brightness data of the three-dimensional model;
and determining the projection according to the position corresponding to the brightness data which is smaller than or equal to a preset threshold value.
4. An image processing apparatus, comprising:
the acquisition module is used for acquiring first depth information of the target image;
the projection module is used for projecting a three-dimensional model of the target image in the space where the electronic equipment is located according to the first depth information;
a receiving module for receiving a first input of a target position of the three-dimensional model;
the processing module is used for responding to the first input and adjusting the three-dimensional size of the three-dimensional model target position according to the input parameters of the first input;
the identification module is used for identifying a motion starting point and a motion ending point of the first input;
a determination module for determining a displacement between the motion start point and the motion end point; and under the condition that the displacement belongs to a preset displacement interval, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation;
the processing module is also used for adjusting the three-dimensional size according to the size variation;
or,
the image processing apparatus further includes:
the display module is used for displaying the numerical value and the size threshold value of the three-dimensional size;
the processing module is further used for adjusting the three-dimensional size according to the target three-dimensional size value corresponding to the first input;
wherein the first input is for inputting the target three-dimensional size value.
5. The image processing device according to claim 4, wherein the identification module is specifically configured to:
capturing a projection of the first input on the three-dimensional model;
generating a motion trail of the first input according to the projection;
and determining the motion starting point and the motion ending point according to the motion trail.
6. The image processing apparatus according to claim 5, wherein the identification module is specifically configured to:
collecting brightness data of the three-dimensional model;
and determining the projection according to the position corresponding to the brightness data which is smaller than or equal to a preset threshold value.
7. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the image processing method of any one of claims 1 to 3.
8. A computer-readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 3.
CN202011414651.7A 2020-12-07 2020-12-07 Image processing method, device, electronic equipment and readable storage medium Active CN112529770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011414651.7A CN112529770B (en) 2020-12-07 2020-12-07 Image processing method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112529770A CN112529770A (en) 2021-03-19
CN112529770B true CN112529770B (en) 2024-01-26

Family

ID=74997819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011414651.7A Active CN112529770B (en) 2020-12-07 2020-12-07 Image processing method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112529770B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487727B (en) * 2021-07-14 2022-09-02 广西民族大学 Three-dimensional modeling system, device and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055991A (en) * 2009-10-27 2011-05-11 深圳Tcl新技术有限公司 Conversion method and conversion device for converting two-dimensional image into three-dimensional image
EP2347714A1 (en) * 2010-01-26 2011-07-27 Medison Co., Ltd. Performing image process and size measurement upon a three-dimensional ultrasound image in an ultrasound system
CN107393017A (en) * 2017-08-11 2017-11-24 北京铂石空间科技有限公司 Image processing method, device, electronic equipment and storage medium
CN108241434A (en) * 2018-01-03 2018-07-03 广东欧珀移动通信有限公司 Man-machine interaction method, device, medium and mobile terminal based on depth of view information
CN108550182A (en) * 2018-03-15 2018-09-18 维沃移动通信有限公司 A kind of three-dimensional modeling method and terminal
CN109727191A (en) * 2018-12-26 2019-05-07 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN110908517A (en) * 2019-11-29 2020-03-24 维沃移动通信有限公司 Image editing method, image editing device, electronic equipment and medium
CN111369681A (en) * 2020-03-02 2020-07-03 腾讯科技(深圳)有限公司 Three-dimensional model reconstruction method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5765070B2 (en) * 2011-06-13 2015-08-19 ソニー株式会社 Display switching device, display switching method, display switching program

Also Published As

Publication number Publication date
CN112529770A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
WO2020216054A1 (en) Sight line tracking model training method, and sight line tracking method and device
KR101800617B1 (en) Display apparatus and Method for video calling thereof
CN107124543B (en) Shooting method and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
WO2019174628A1 (en) Photographing method and mobile terminal
CN109348135A (en) Photographic method, device, storage medium and terminal device
CN108989678B (en) Image processing method and mobile terminal
JP2015526927A (en) Context-driven adjustment of camera parameters
CN107172347B (en) Photographing method and terminal
CN103365488A (en) Information processing apparatus, program, and information processing method
CN112669381B (en) Pose determination method and device, electronic equipment and storage medium
WO2019214641A1 (en) Optical tag based information apparatus interaction method and system
CN111083374B (en) Filter adding method and electronic equipment
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN112529770B (en) Image processing method, device, electronic equipment and readable storage medium
CN109639981B (en) Image shooting method and mobile terminal
CN113852756B (en) Image acquisition method, device, equipment and storage medium
US11360588B2 (en) Device, method, and program for generating multidimensional reaction-type image, and method, and program for reproducing multidimensional reaction-type image
CN110942064B (en) Image processing method and device and electronic equipment
CN108550182B (en) Three-dimensional modeling method and terminal
CN116363725A (en) Portrait tracking method and system for display device, display device and storage medium
CN108174101B (en) Shooting method and device
US10990802B2 (en) Imaging apparatus providing out focusing and method for controlling the same
CN112887515B (en) Video generation method and device
CN111880422B (en) Equipment control method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant