CN108921102B - 3D image processing method and device - Google Patents
3D image processing method and device
- Publication number
- CN108921102B CN108921102B CN201810728026.6A CN201810728026A CN108921102B CN 108921102 B CN108921102 B CN 108921102B CN 201810728026 A CN201810728026 A CN 201810728026A CN 108921102 B CN108921102 B CN 108921102B
- Authority
- CN
- China
- Prior art keywords
- model
- target
- processing
- sub
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Acoustics & Sound (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Computational Linguistics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a 3D image processing method and device. The processing device comprises a plurality of 3D cameras and a processing end; the processing end comprises a generating module, a recognition module, an acquisition module and a sending module. The 3D cameras acquire 3D images of a plurality of shooting targets in a space. The generating module generates a 3D model from the 3D images captured at the same moment. The recognition module acquires, from the 3D model, a 3D sub-model corresponding to a shooting target, the 3D sub-model containing only a face model. The acquisition module acquires a target image converted from the 3D sub-model, and the sending module sends the target image to the terminal device matched with the target image. The method and device can extract the 3D sub-model of a single person from the 3D model and feed the sub-model's image back to a terminal device, which facilitates monitoring and automatic detection of the target's behavior.
Description
Technical Field
The invention relates to a method and a device for processing a 3D image.
Background
A 3D camera is built around a 3D lens assembly and generally has two or more imaging lenses whose spacing is close to that of human eyes, so it can capture the slightly different views of the same scene that the two eyes would see. A holographic 3D camera additionally has a grating disc above the lens; through dot-grating imaging or shaped-grating holographic imaging, the same image can be viewed from all directions, as if the viewer were in the scene.
To date, the 3D revolution has centered on Hollywood blockbusters and major sporting events. With the advent of consumer 3D cameras, the technology has come one step closer to home users. Once such cameras are available, memorable moments of life, such as a child's first steps or a university graduation, can be captured with a 3D lens.
A 3D camera typically has two or more lenses. Working much like the human brain, it fuses the two lens images together into a 3D image. These images can be played on a 3D television and viewed through so-called active shutter glasses, or viewed directly on a naked-eye (autostereoscopic) 3D display. Active shutter glasses alternately open and close the left and right lenses about 60 times per second, so each eye sees a slightly different picture of the same scene and the brain perceives a single scene in 3D.
Existing 3D cameras suffer from limited functionality and a narrow range of applications.
Disclosure of Invention
The invention aims to overcome the prior-art defects of limited functionality and narrow application range of 3D cameras, and provides a 3D image processing method and device that facilitate monitoring and automatic detection of a target's behavior.
The invention solves the above technical problem through the following technical solutions:
a processing device for 3D images is characterized in that the processing device comprises a plurality of 3D cameras and a processing end, the processing end comprises a generating module, an identifying module, an acquiring module and a sending module,
the 3D camera is used for acquiring 3D images of a plurality of shooting targets in a space;
the generation module is used for generating a 3D model according to the 3D images at the same moment;
the identification module is used for acquiring a 3D submodel corresponding to a shooting target in the 3D model, and the 3D submodel only comprises a face model;
the acquisition module is used for acquiring a target image converted from the 3D sub-model, wherein the target image is a 2D picture, a 2D animation, a 3D image or a 3D animation;
the sending module is used for sending the target image to the terminal equipment matched with the target image.
Preferably, the processing device further comprises a microphone, the processing end further comprises a voice module and a processing module,
the microphone is used for collecting voice in the space;
the voice module is used for acquiring a first target voice and a second target voice from the voice through voiceprint recognition;
the processing module is used for identifying an interrogative sentence in the first target voice and converting the interrogative sentence into text;
the acquisition module is used for acquiring a target sub-model, and the target sub-model is a 3D sub-model corresponding to a shooting target which sends out second target voice;
the sending module is used for sending, to the terminal device matched with the target image, the target image of the target sub-model and the text corresponding to the interrogative sentence in the first target voice, wherein the sent content further comprises the audio of the second target voice.
Preferably, the voice module is used for recognizing the identity of the second target voice,
the acquisition module is used for identifying the target sub-model corresponding to the identity through a face recognition technology.
Preferably, the processing end comprises a selecting module,
for each 3D model, the selection module is used for selecting a plurality of object feature points at preset distance positions away from the center by taking a face model as the center;
the recognition module is used for intercepting a 3D sub-model of the face model from a 3D model according to the object feature points;
for one face model, the acquisition module is used for acquiring all 3D submodels of the face model and arranging all the 3D submodels according to time sequence to generate 3D animation corresponding to the face model.
Preferably, the processing end further includes a projection module, and the projection module is configured to project each frame of the 3D animation into a 2D picture, and arrange all the 2D pictures according to a time sequence to generate the 2D animation corresponding to the face model.
The invention also provides a processing method of a 3D image, which is characterized in that the processing method is implemented by the processing device, the processing device comprises a plurality of 3D cameras and a processing end, and the processing method comprises:
the 3D camera acquires 3D images of a plurality of shooting targets in a space;
the processing end generates a 3D model according to the 3D images at the same moment;
the processing terminal obtains a 3D sub-model corresponding to a shooting target in the 3D model, wherein the 3D sub-model only comprises a human face model;
the processing terminal obtains a target image converted from the 3D sub-model, wherein the target image is a 2D picture, a 2D animation, a 3D image or a 3D animation;
and the processing terminal sends the target image to the terminal equipment matched with the target image.
Preferably, the processing device further includes a microphone, and the processing method includes:
the microphone collects speech in the space;
the processing terminal acquires a first target voice and a second target voice from the voice through voiceprint recognition;
the processing end identifies an interrogative sentence in the first target voice and converts the interrogative sentence into text;
the processing terminal acquires a target sub-model, wherein the target sub-model is a 3D sub-model corresponding to a shooting target which sends out second target voice;
and the processing terminal sends, to the terminal device matched with the target image, the target image of the target sub-model and the text corresponding to the interrogative sentence in the first target voice, wherein the sent content further comprises the audio of the second target voice.
Preferably, the processing method includes:
the processing end identifies the identity of the second target voice,
and the processing terminal identifies the target sub-model corresponding to the identity through a face recognition technology.
Preferably, the processing method comprises:
for each 3D model, the processing end takes a face model as a center, and a plurality of object feature points are selected at preset distance positions from the center;
the processing end intercepts a 3D sub-model of the face model from a 3D model according to the object feature points;
for one face model, the processing terminal acquires all 3D sub-models of the face model, and arranges all the 3D sub-models according to time sequence to generate 3D animation corresponding to the face model.
Preferably, the processing method comprises:
and the processing terminal projects each frame of the 3D animation into a 2D picture, and arranges all the 2D pictures according to a time sequence to generate the 2D animation corresponding to the human face model.
On the basis of common knowledge in the field, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the invention.
The positive progress effects of the invention are as follows:
the 3D image processing method and the device can acquire the 3D sub-model of a single person in the model image through the 3D model image and feed back the image of the sub-model to the terminal equipment, thereby being convenient for monitoring and automatically detecting the behavior of the target.
Drawings
FIG. 1 is a flowchart of a processing method of embodiment 1 of the present invention.
FIG. 2 is a flowchart of a processing method according to embodiment 2 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
The present embodiment provides a processing apparatus for 3D images, the processing apparatus includes a plurality of 3D cameras, a processing end and a microphone.
The processing terminal comprises a generating module, a recognition module, an acquisition module, a sending module, a voice module and a processing module.
The 3D camera is used for acquiring 3D images of a plurality of shooting targets in a space.
The space may be a room, a classroom, a jail cell, or the like, and each of the photographic objects may be a person.
The generation module is used for generating a 3D model according to the 3D images at the same moment.
A 3D model can be generated by stitching 3D images acquired by a plurality of 3D cameras.
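The stitching step can be sketched as follows. This is only a minimal illustration, not the patent's actual algorithm: it assumes each 3D camera's pose (a rotation about the vertical axis plus a translation, known from calibration) and simply transforms every camera's point cloud into a shared world frame before concatenating. All function names are invented for the example.

```python
import math

def to_world(point, yaw_deg, translation):
    """Rotate a camera-frame point about the vertical (z) axis and translate
    it into the shared world frame."""
    x, y, z = point
    th = math.radians(yaw_deg)
    tx, ty, tz = translation
    return (x * math.cos(th) - y * math.sin(th) + tx,
            x * math.sin(th) + y * math.cos(th) + ty,
            z + tz)

def stitch_model(views):
    """views: list of (points, yaw_deg, translation), one entry per 3D camera,
    all captured at the same moment. Returns one merged point cloud."""
    model = []
    for points, yaw, trans in views:
        model.extend(to_world(p, yaw, trans) for p in points)
    return model

# One camera at the origin and a second camera rotated 90 degrees and offset:
merged = stitch_model([
    ([(1.0, 0.0, 0.5)], 0.0, (0.0, 0.0, 0.0)),
    ([(1.0, 0.0, 0.5)], 90.0, (2.0, 0.0, 0.0)),
])
```

A real system would additionally refine the merged cloud (for example with ICP registration) rather than trusting calibration alone.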
The identification module is used for acquiring a 3D sub-model corresponding to a shooting target in the 3D model, and the 3D sub-model only comprises a human face model.
In the embodiment, a 3D sub-model only comprising a face model is extracted from a 3D model.
The acquisition module is used for acquiring a target image converted from the 3D sub-model, wherein the target image is a 2D picture, a 2D animation, a 3D image or a 3D animation. In this embodiment the target image is a 3D animation, generated by arranging the 3D sub-models frame by frame in time order.
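The frame-by-frame arrangement described above can be sketched as a sort by capture time. This assumes each 3D sub-model carries a timestamp; the names are invented for the example:

```python
def build_3d_animation(frames):
    """frames: iterable of (timestamp, submodel) pairs, possibly out of order.
    Returns the sub-models arranged in time order, one per animation frame."""
    return [submodel for _, submodel in sorted(frames, key=lambda f: f[0])]

animation = build_3d_animation([(2.0, "frame_b"), (1.0, "frame_a"), (3.0, "frame_c")])
```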
The sending module is used for sending the target image to the terminal equipment matched with the target image.
By using the microphone, the processing device of the embodiment can accurately identify the scene and the speaking person.
The microphone is used for collecting voice in the space;
the voice module is used for acquiring a first target voice and a second target voice from the voice through voiceprint recognition;
the processing module is used for identifying an interrogative sentence in a first target voice and converting the interrogative sentence into characters;
the voice module is used for identifying the identity of the second target voice.
The acquisition module is used for identifying the target sub-model corresponding to the identity through a face recognition technology. The target submodel is a 3D submodel corresponding to a shooting target sending out second target voice;
the sending module is used for sending, to the terminal device matched with the target image, the target image of the target sub-model and the text corresponding to the interrogative sentence in the first target voice, wherein the sent content further comprises the audio of the second target voice.
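The patent does not specify how the voice module's voiceprint recognition works. One common approach, shown here purely as a hedged illustration, compares fixed-length speaker embeddings with cosine similarity; the embeddings, names, and threshold below are all invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two speaker-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def label_segments(segments, enrolled, threshold=0.8):
    """segments: list of (segment_id, embedding) for each speech segment.
    enrolled: dict mapping speaker name -> enrolled voiceprint embedding.
    Returns segment_id -> best-matching speaker, or None below threshold."""
    labels = {}
    for seg_id, emb in segments:
        best = max(enrolled, key=lambda name: cosine(emb, enrolled[name]))
        labels[seg_id] = best if cosine(emb, enrolled[best]) >= threshold else None
    return labels

enrolled = {"teacher": (1.0, 0.0), "student": (0.0, 1.0)}
labels = label_segments([("s1", (0.9, 0.1)), ("s2", (0.1, 0.9))], enrolled)
```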
A concrete application scenario of this embodiment is as follows: the speech of the lecturing teacher is set as the first target voice; when the teacher asks a question during the lecture, the speech of the answering student is the second target voice. The processing device of this embodiment can then transmit the audio and video of the teacher's question and the student's answer to the student's parents.
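In this scenario, once the teacher's speech has been transcribed to text, the question-recognition step could be approximated by simple surface cues. Real systems use syntactic or learned classifiers; the English word list and examples here are an invented stand-in:

```python
QUESTION_OPENERS = {"who", "what", "when", "where", "why", "how", "which",
                    "is", "are", "do", "does", "can", "could"}

def extract_questions(sentences):
    """Return the sentences that look like questions, based on a trailing
    question mark or a typical interrogative opening word."""
    questions = []
    for sentence in sentences:
        text = sentence.strip()
        words = text.split()
        first = words[0].lower().rstrip(",") if words else ""
        if text.endswith("?") or first in QUESTION_OPENERS:
            questions.append(text)
    return questions

found = extract_questions([
    "Today we cover stereo imaging.",
    "What is the baseline between the two lenses?",
    "Open your textbooks.",
])
```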
With the processing apparatus, the present embodiment further provides a processing method, including:
Step 103: the processing end acquires a first target voice and a second target voice from the speech through voiceprint recognition.
Step 105: the processing end identifies the identity of the second target voice.
Step 106: the processing end identifies the target sub-model corresponding to the identity through face recognition technology.
The target sub-model is a 3D sub-model corresponding to a shooting target for sending out second target voice.
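Step 106 links a speaker identity to a face in the 3D model. A hedged sketch, assuming both the enrolled identities and the faces detected in the sub-models are represented by comparable embedding vectors (all names and numbers invented):

```python
def match_submodel(identity, enrolled_faces, detected_faces):
    """enrolled_faces: identity -> reference face embedding.
    detected_faces: submodel_id -> embedding of the face in that 3D sub-model.
    Returns the submodel_id whose face embedding is nearest the identity's."""
    reference = enrolled_faces[identity]

    def sq_dist(emb):
        return sum((a - b) ** 2 for a, b in zip(emb, reference))

    return min(detected_faces, key=lambda sid: sq_dist(detected_faces[sid]))

submodel = match_submodel(
    "student_li",
    {"student_li": (0.2, 0.8)},
    {"sub_1": (0.9, 0.1), "sub_2": (0.25, 0.75)},
)
```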
Example 2
This embodiment is substantially the same as embodiment 1 except that:
the processing end comprises a selection module.
For each 3D model, the selection module is used for selecting a plurality of object feature points at preset distance positions away from the center by taking a face model as the center;
the recognition module is used for intercepting a 3D sub-model of the face model from a 3D model according to the object feature points;
for one face model, the acquisition module is used for acquiring all 3D submodels of the face model and arranging all the 3D submodels according to time sequence to generate 3D animation corresponding to the face model.
Because all the 3D sub-models are extracted through the same object feature points, their sizes can be matched when they are arranged, and the object feature points of adjacent sub-models can be aligned, which improves the image quality of the 3D animation.
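The cropping described above can be sketched as keeping every model point near the detected face centre. The spherical-neighbourhood criterion is an assumption for the sketch, since the patent only says the feature points lie at preset distances from the centre:

```python
def crop_face_submodel(model_points, face_center, preset_distance):
    """Keep the points of the full 3D model that lie within preset_distance
    of the detected face centre, forming the face-only sub-model."""
    cx, cy, cz = face_center
    return [
        (x, y, z) for x, y, z in model_points
        if ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 <= preset_distance
    ]

submodel = crop_face_submodel(
    [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0)],
    face_center=(0.0, 0.0, 0.0),
    preset_distance=0.5,
)
```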
The processing terminal also comprises a projection module, wherein the projection module is used for projecting each frame of the 3D animation into a 2D picture, and arranging all the 2D pictures according to a time sequence to generate the 2D animation corresponding to the human face model.
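The projection module's frame-by-frame conversion can be illustrated with a basic pinhole camera model. The focal length and image-plane convention are assumptions for the sketch, not details from the patent:

```python
def project_frame(points3d, focal=1.0):
    """Pinhole-project 3D points (camera frame, z > 0) onto a 2D image plane."""
    return [(focal * x / z, focal * y / z) for x, y, z in points3d]

def project_animation(frames3d, focal=1.0):
    """Project every frame of a 3D animation, preserving time order,
    to obtain the corresponding 2D animation."""
    return [project_frame(frame, focal) for frame in frames3d]

pictures = project_animation([[(2.0, 4.0, 2.0)], [(1.0, 0.0, 4.0)]])
```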
Correspondingly, the processing method of the embodiment includes:
Step 200: for each 3D model, the processing end takes a face model as the center and selects a plurality of object feature points at positions a preset distance from the center.
Step 203: the processing end projects each frame of the 3D animation into a 2D picture and arranges all the 2D pictures in time order to generate the 2D animation corresponding to the face model.
The processing method of this embodiment can generate the 3D sub-model of step 106 in embodiment 1 with higher fidelity.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.
Claims (8)
1. A processing device for 3D images is characterized in that the processing device comprises a plurality of 3D cameras and a processing end, the processing end comprises a generating module, an identifying module, an acquiring module and a sending module,
the 3D camera is used for acquiring 3D images of a plurality of shooting targets in a space;
the generation module is used for generating a 3D model according to the 3D images at the same moment;
the identification module is used for acquiring a 3D sub-model corresponding to a shooting target in the 3D model, and the 3D sub-model only comprises a face model;
the acquisition module is used for acquiring a target image converted by the 3D sub-model, wherein the target image is a 2D picture, a 2D animation, a 3D image or a 3D animation;
the sending module is used for sending the target image to the terminal equipment matched with the target image;
wherein, the processing device also comprises a microphone, the processing end also comprises a voice module and a processing module,
the microphone is used for collecting voice in the space;
the voice module is used for acquiring a first target voice and a second target voice from the voice through voiceprint recognition;
the processing module is used for identifying an interrogative sentence in the first target voice and converting the interrogative sentence into text;
the acquisition module is used for acquiring a target sub-model, and the target sub-model is a 3D sub-model corresponding to a shooting target which sends out second target voice;
the sending module is used for sending, to the terminal device matched with the target image, the target image of the target sub-model and the text corresponding to the interrogative sentence in the first target voice, wherein the sent content further comprises the audio of the second target voice.
2. The processing apparatus of claim 1,
the voice module is used for identifying the identity of the second target voice,
the acquisition module is used for identifying the target sub-model corresponding to the identity through a face recognition technology.
3. The processing apparatus of claim 1, wherein the processing end comprises a selection module,
for each 3D model, the selection module is used for selecting a plurality of object feature points at preset distance positions away from the center by taking a face model as the center;
the recognition module is used for intercepting a 3D sub-model of the face model from a 3D model according to the object feature points;
for one face model, the acquisition module is used for acquiring all 3D submodels of the face model and arranging all the 3D submodels according to time sequence to generate 3D animation corresponding to the face model.
4. The processing apparatus as claimed in claim 3, wherein the processing end further includes a projection module, and the projection module is configured to project each frame of the 3D animation into a 2D picture, and arrange all the 2D pictures in a time sequence to generate the 2D animation corresponding to the face model.
5. A method for processing 3D images, the method being implemented by a processing apparatus according to any one of claims 1 to 4, the processing apparatus comprising a plurality of 3D cameras and a processing end, the method comprising:
the 3D camera acquires 3D images of a plurality of shooting targets in a space;
the processing end generates a 3D model according to the 3D images at the same moment;
the processing terminal obtains a 3D sub-model corresponding to a shooting target in the 3D model, wherein the 3D sub-model only comprises a human face model;
the processing terminal obtains a target image converted by the 3D sub-model, wherein the target image is a 2D picture, a 2D animation, a 3D image or a 3D animation;
the processing terminal sends the target image to the terminal equipment matched with the target image;
wherein, the processing device also comprises a microphone, and the processing method comprises the following steps:
the microphone collects speech in the space;
the processing terminal acquires a first target voice and a second target voice from the voice through voiceprint recognition;
the processing end identifies an interrogative sentence in the first target voice and converts the interrogative sentence into text;
the processing terminal acquires a target sub-model, wherein the target sub-model is a 3D sub-model corresponding to a shooting target which sends out second target voice;
and the processing terminal sends, to the terminal device matched with the target image, the target image of the target sub-model and the text corresponding to the interrogative sentence in the first target voice, wherein the sent content further comprises the audio of the second target voice.
6. The processing method of claim 5, wherein the processing method comprises:
the processing end identifies the identity of the second target voice,
and the processing terminal identifies the target sub-model corresponding to the identity through a face recognition technology.
7. The processing method of claim 5, wherein the processing method comprises:
for each 3D model, the processing end takes a face model as a center, and a plurality of object feature points are selected at preset distance positions from the center;
the processing end intercepts a 3D sub-model of the face model from a 3D model according to the object feature points;
for one face model, the processing terminal acquires all 3D sub-models of the face model, and arranges all the 3D sub-models according to time sequence to generate 3D animation corresponding to the face model.
8. The processing method of claim 7, wherein the processing method comprises:
and the processing terminal projects each frame of the 3D animation into a 2D picture, and arranges all the 2D pictures according to a time sequence to generate the 2D animation corresponding to the human face model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810728026.6A CN108921102B (en) | 2018-07-05 | 2018-07-05 | 3D image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810728026.6A CN108921102B (en) | 2018-07-05 | 2018-07-05 | 3D image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108921102A CN108921102A (en) | 2018-11-30 |
CN108921102B true CN108921102B (en) | 2022-07-05 |
Family
ID=64425273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810728026.6A Active CN108921102B (en) | 2018-07-05 | 2018-07-05 | 3D image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108921102B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110288680A (en) * | 2019-05-30 | 2019-09-27 | 盎锐(上海)信息科技有限公司 | Image generating method and mobile terminal |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184562A (en) * | 2011-05-10 | 2011-09-14 | 深圳大学 | Method and system for automatically constructing three-dimensional face animation model |
CN103281507A (en) * | 2013-05-06 | 2013-09-04 | 上海大学 | Videophone system and videophone method based on true three-dimensional display |
CN103985172A (en) * | 2014-05-14 | 2014-08-13 | 南京国安光电科技有限公司 | An access control system based on three-dimensional face identification |
KR101499350B1 (en) * | 2013-10-10 | 2015-03-12 | 재단법인대구경북과학기술원 | System and method for decoding password using 3d gesture recognition |
CN106426180A (en) * | 2016-11-24 | 2017-02-22 | 深圳市旗瀚云技术有限公司 | Robot capable of carrying out intelligent following based on face tracking |
EP3147827A1 (en) * | 2015-06-24 | 2017-03-29 | Samsung Electronics Co., Ltd. | Face recognition method and apparatus |
CN107592449A (en) * | 2017-08-09 | 2018-01-16 | 广东欧珀移动通信有限公司 | Three-dimension modeling method, apparatus and mobile terminal |
CN108174154A (en) * | 2017-12-29 | 2018-06-15 | 佛山市幻云科技有限公司 | Long-distance video method, apparatus and server |
CN109978734A (en) * | 2019-01-30 | 2019-07-05 | 深圳市致善教育科技有限公司 | A kind of intelligent operation system based on desk lamp |
Non-Patent Citations (2)
Title |
---|
Research on 3D Face Modeling and Expression Animation Technology; Shang Shenghe; 《Wanfang Data》; 2013-03-20; 1-59 *
World's First 3D Camera Debuts with Face Recognition (with photos); Sina Technology; 《http://tech.sina.com.cn/d/2010-07-28/09104479491.shtml》; 2010-07-28; 1-2 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108600729B (en) | Dynamic 3D model generation device and image generation method | |
US20210374390A1 (en) | Image processing method and apparatus, and terminal device | |
US20150358539A1 (en) | Mobile Virtual Reality Camera, Method, And System | |
CN106650671A (en) | Human face identification method, apparatus and system | |
CN108347505B (en) | Mobile terminal with 3D imaging function and image generation method | |
WO2015085406A1 (en) | Systems and methods for producing panoramic and stereoscopic videos | |
CN108391116B (en) | Whole body scanning device and method based on 3D imaging technology | |
JP2009239347A (en) | Image processor and image processing program | |
CN108921102B (en) | 3D image processing method and device | |
CN111161399B (en) | Data processing method and assembly for generating three-dimensional model based on two-dimensional image | |
CN109241947A (en) | Information processing unit and method for the monitoring of stream of people's momentum | |
CN105590106A (en) | Novel face 3D expression and action identification system | |
CN108848366B (en) | Information acquisition device and method based on 3D camera | |
CN108234872A (en) | Mobile terminal and its photographic method | |
CN111192305B (en) | Method and apparatus for generating three-dimensional image | |
CN108513122B (en) | Model adjusting method and model generating device based on 3D imaging technology | |
CN108737808B (en) | 3D model generation device and method | |
CN104780341B (en) | A kind of information processing method and information processing unit | |
CN111629194B (en) | Method and system for converting panoramic video into 6DOF video based on neural network | |
CN106101824B (en) | Information processing method, electronic equipment and server | |
CN109657702B (en) | 3D depth semantic perception method and device | |
CN109657559B (en) | Point cloud depth perception coding engine device | |
CN109272453B (en) | Modeling device and positioning method based on 3D camera | |
Ford et al. | Subjective video quality assessment methods for recognition tasks | |
CN108419071B (en) | Shooting device and method based on multiple 3D cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-07-28. Address after: Room 2134, Floor 2, No. 152 and 153, Lane 3938, Huqingping Road, Qingpu District, Shanghai, 201703. Patentee after: Shanghai Qingyan Heshi Technology Co.,Ltd. Address before: No. 206, Building 1, No. 3938 Huqingping Road, Qingpu District, Shanghai, 201703. Patentee before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd. |