CN110188616B - Space modeling method and device based on 2D and 3D images - Google Patents

Space modeling method and device based on 2D and 3D images

Info

Publication number
CN110188616B
CN110188616B (application CN201910367407.0A)
Authority
CN
China
Prior art keywords
image
sub
module
point cloud
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910367407.0A
Other languages
Chinese (zh)
Other versions
CN110188616A (en)
Inventor
吴跃华 (Wu Yuehua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qingyan Heshi Technology Co ltd
Original Assignee
Shanghai Onwing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Onwing Information Technology Co Ltd
Priority to CN201910367407.0A
Publication of CN110188616A
Application granted
Publication of CN110188616B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Abstract

The invention discloses a space modeling method and device based on 2D and 3D images. The space modeling method comprises the following steps: acquiring a 2D image and a 3D point cloud of a shooting target, wherein the shooting direction and time of the 2D image correspond to those of the 3D point cloud; acquiring a 3D image from the 2D image and the 3D point cloud; acquiring target feature points on the 2D image and the depth information of the target feature points on the 3D image; and identifying the identity of the shooting target using the target feature points and the depth information. The space modeling method and device based on 2D and 3D images can identify objects in a spatial image and add identity information to them, support more types of identification images, and offer a wider identification range and higher identification accuracy.

Description

Space modeling method and device based on 2D and 3D images
Technical Field
The invention relates to a space modeling method and device based on 2D and 3D images.
Background
Spatial modeling is a biometric technique for identifying a person from facial feature information. The related family of technologies, commonly also called face recognition, uses a camera or video camera to capture images or video streams containing faces, automatically detects and tracks the faces in the images, and then performs recognition on the detected faces.
Research on spatial modeling systems began in the 1960s, advanced after the 1980s with progress in computer technology and optical imaging, and truly entered an early application stage in the late 1990s, driven mainly by technical implementations in the United States, Germany and Japan. The key to the success of a spatial modeling system is whether it possesses a state-of-the-art core algorithm that delivers practical recognition rates and recognition speeds. A spatial modeling system integrates a range of specialized technologies, including artificial intelligence, machine recognition, machine learning, model theory, expert systems and video image processing, and must also combine the theory and implementation of intermediate-value processing. It is the latest application of biometric recognition, and the realization of its core technology marks a shift from weak artificial intelligence toward strong artificial intelligence.
In the prior art, spatial modeling is not accurate enough, is prone to errors, and has a narrow field of application.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, in which spatial modeling is insufficiently accurate, prone to errors and limited to a narrow field of application, and provides a space modeling method and device based on 2D and 3D images that can identify objects in a spatial image and add identity information to them, while supporting more types of identification images, a wider identification range and higher identification accuracy.
The invention solves the above technical problem through the following technical solution:
a space modeling method based on 2D and 3D images is characterized by comprising the following steps:
acquiring a 2D image and a 3D point cloud of a target area, wherein the shooting directions and the time of the 2D image and the 3D point cloud correspond to each other;
dividing the 2D image and the 3D point cloud into a plurality of sub-areas, wherein the sub-areas of the 2D image correspond to the sub-areas of the 3D point cloud one by one;
identifying the identity of the object model in each sub-region;
and adding identity information to the object model in the sub-region.
Preferably, the dividing the 2D image and the 3D point cloud into a plurality of sub-regions includes:
acquiring a 3D image according to the 2D image and the 3D point cloud;
acquiring all continuous lines in the 3D point cloud;
for any two lines, obtaining the minimum distance of the lines;
when the minimum distance is larger than a preset value, dividing sub-areas of the 3D point cloud according to the central point of the line corresponding to the minimum distance larger than the preset value;
and acquiring a sub-region of the 2D image corresponding to the sub-region of the 3D point cloud according to the 3D image.
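The description leaves the grouping itself open. Purely as a hedged illustration, the Python sketch below assumes each continuous line is available as an (N, 3) NumPy array of sampled 3D points and uses an arbitrary preset value of 0.15 m: lines whose minimum mutual distance does not exceed the preset value are merged into one group, lines farther apart end up in different groups, and each group becomes a sub-region with a centre point derived from the centres of its lines.

import numpy as np
from scipy.spatial.distance import cdist

def min_line_distance(line_a, line_b):
    # line_a, line_b: (N, 3) arrays of 3D points sampled along two continuous lines.
    return cdist(line_a, line_b).min()

def group_lines_into_subregions(lines, preset_value=0.15):
    # Union-find over the lines: two lines are merged when their minimum distance
    # is not larger than the preset value (0.15 m is an assumed default).
    parent = list(range(len(lines)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            if min_line_distance(lines[i], lines[j]) <= preset_value:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(lines)):
        groups.setdefault(find(i), []).append(i)
    # One sub-region per group; its centre is the mean of the member lines' centre points.
    return [{"line_indices": members,
             "center": np.mean([lines[k].mean(axis=0) for k in members], axis=0)}
            for members in groups.values()]
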
Preferably, the spatial modeling method includes:
acquiring a wall model and wall lines of the target area according to the 3D point cloud;
the dividing of the 2D image and the 3D point cloud into a plurality of sub-regions comprises:
and acquiring the minimum distance of the lines for any two continuous lines except the wall line.
Preferably, the identifying the identity of the object model in each sub-region comprises:
for each sub-region, acquiring a 2D image of the sub-region;
generating a fingerprint character string of the 2D image;
and searching the identity of the 2D image according to the fingerprint character string.
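The description does not say how the fingerprint character string is computed; an average perceptual hash is one common choice and is used here only as an assumption. The sketch builds such a string for a sub-region's 2D image and looks it up in a hypothetical identity library keyed by stored fingerprint strings.

import numpy as np
from PIL import Image

def fingerprint_string(image, hash_size=8):
    # Average hash: shrink to hash_size x hash_size grayscale, then write '1' or '0'
    # per pixel depending on whether it is brighter than the mean of the shrunken image.
    if isinstance(image, str):
        image = Image.open(image)
    small = image.convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(small, dtype=np.float32)
    return "".join("1" if p > pixels.mean() else "0" for p in pixels.flatten())

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def lookup_identity(fingerprint, library, max_distance=10):
    # library: {stored fingerprint string: identity label}, e.g.
    # {"01101001...": "television", "10010110...": "dining table"} (hypothetical entries).
    stored, identity = min(library.items(), key=lambda kv: hamming(fingerprint, kv[0]))
    return identity if hamming(fingerprint, stored) <= max_distance else None
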
Preferably, the identifying the identity of the object model in each sub-region comprises:
for each subarea, acquiring a 3D subarea image according to the 2D image and the 3D point cloud;
generating a plurality of 2D projection images corresponding to the sub-regions through projection of the 3D sub-region images in a plurality of directions;
generating a fingerprint character string of each 2D projection image;
searching the identity of the 2D projection image according to each fingerprint character string;
and counting the identities, and taking the identity counted most often as the identity information of the object model in the sub-region.
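As a hedged illustration of this multi-view variant, the following sketch orthographically projects a sub-region's 3D points along three assumed directions (the x, y and z axes), fingerprints each projection with the fingerprint_string() helper sketched above, and keeps the identity returned most often; the projection directions, the 64-pixel resolution and the simple majority vote are all assumptions rather than requirements of the description.

from collections import Counter
import numpy as np
from PIL import Image

def project_to_image(points, drop_axis, resolution=64):
    # Orthographic projection: drop one coordinate axis and rasterize the remaining
    # two coordinates into a resolution x resolution occupancy image.
    kept = np.delete(points, drop_axis, axis=1)
    kept = kept - kept.min(axis=0)
    scale = kept.max() if kept.max() > 0 else 1.0
    uv = np.clip((kept / scale * (resolution - 1)).astype(int), 0, resolution - 1)
    canvas = np.zeros((resolution, resolution), dtype=np.uint8)
    canvas[uv[:, 1], uv[:, 0]] = 255
    return Image.fromarray(canvas)

def identify_subregion(points, library):
    # points: (N, 3) array of the sub-region's 3D points; library as in lookup_identity().
    votes = []
    for axis in (0, 1, 2):  # three assumed projection directions
        view = project_to_image(points, axis)
        identity = lookup_identity(fingerprint_string(view), library)
        if identity is not None:
            votes.append(identity)
    return Counter(votes).most_common(1)[0][0] if votes else None
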
The invention also provides a space modeling device based on 2D and 3D images, which is characterized in that the space modeling device comprises a 2D lens, a 3D lens and a processor, the processor comprises a dividing module, an identification module and an adding module,
the 2D lens and the 3D lens are respectively used for acquiring a 2D image and a 3D point cloud of a target area, and the shooting directions and the time of the 2D image and the 3D point cloud correspond to each other;
the dividing module is used for dividing the 2D image and the 3D point cloud into a plurality of sub-areas, and the sub-areas of the 2D image correspond to the sub-areas of the 3D point cloud one by one;
the identification module is used for identifying the identity of the object model in each sub-region;
the adding module is used for adding identity information to the object models in the sub-regions.
Preferably, the processor comprises an acquisition module, a processing module, a calculation module and a matching module,
the acquisition module is used for acquiring a 3D image according to the 2D image and the 3D point cloud;
the processing module is used for acquiring all continuous lines in the 3D point cloud;
for any two lines, the calculation module is used for acquiring the minimum distance of the lines;
when the minimum distance is larger than a preset value, the dividing module is used for dividing sub-areas of the 3D point cloud according to the central point of the line corresponding to the minimum distance larger than the preset value;
the matching module is used for acquiring a subarea of the 2D image corresponding to the subarea of the 3D point cloud according to the 3D image.
Preferably, the identification module is further configured to obtain a wall model and wall lines of the target area according to the 3D point cloud;
the calculation module is used for acquiring the minimum distance of the lines for any two continuous lines except the wall line.
Preferably, the processor includes an acquisition module, a generation module and a searching module,
for each sub-region, the acquisition module is configured to acquire a 2D image of the sub-region;
the generation module is used for generating a fingerprint character string of the 2D image;
the searching module is used for searching the identity of the 2D image according to the fingerprint character string.
Preferably, the processor includes a matching module, a projection module and a statistics module,
for each subarea, the matching module is used for acquiring a 3D subarea image according to the 2D image and the 3D point cloud;
the projection module is used for generating a plurality of 2D projection images corresponding to the sub-regions through projection of the 3D sub-region images in a plurality of directions;
the generation module is used for generating a fingerprint character string of each 2D projection image;
the searching module is used for searching the identity of the 2D projection image according to each fingerprint character string;
and the statistics module is used for counting the identities and taking the identity counted most often as the identity information of the object model in the sub-region.
On the basis of the common knowledge in the field, the above preferred conditions can be combined randomly to obtain the preferred embodiments of the invention.
The positive progress effects of the invention are as follows:
the space modeling device and method based on the 2D and 3D images can identify the objects in the space images and add identity information to the objects, and the identification images are more in variety, wider in identification range and higher in identification precision.
Drawings
Fig. 1 is a flowchart of a spatial modeling method according to embodiment 1 of the present invention.
Fig. 2 is another flowchart of the spatial modeling method according to embodiment 1 of the present invention.
Fig. 3 is a flowchart of a spatial modeling method according to embodiment 2 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
The embodiment provides a space modeling device based on 2D and 3D images, and the space modeling device comprises a 2D lens, a 3D lens and a processor, wherein the processor comprises a dividing module, an obtaining module, a processing module, a calculating module, a matching module, an identifying module and an adding module.
The 2D lens and the 3D lens are respectively used for acquiring a 2D image and a 3D point cloud of a target area, and the shooting directions and the time of the 2D image and the 3D point cloud correspond to each other;
the dividing module is used for dividing the 2D image and the 3D point cloud into a plurality of sub-areas, and the sub-areas of the 2D image correspond to the sub-areas of the 3D point cloud one by one;
the identification module is used for identifying the identity of the object model in each sub-area;
the adding module is used for adding identity information to the object models in the sub-regions.
Specifically, this embodiment provides a concrete method for dividing sub-regions. Because lines in the space, such as table edges and television frames, are easier to identify and require relatively little computation to identify, the division is based on these lines (one possible handling of lines and wall lines is sketched after the module list below).
The acquisition module is used for acquiring a 3D image according to the 2D image and the 3D point cloud;
the processing module is used for acquiring all continuous lines in the 3D point cloud;
for any two lines, the calculation module is used for acquiring the minimum distance of the lines;
when the minimum distance is larger than a preset value, the dividing module is used for dividing sub-areas of the 3D point cloud according to the central point of the line corresponding to the minimum distance larger than the preset value;
the matching module is used for acquiring a subarea of the 2D image corresponding to the subarea of the 3D point cloud according to the 3D image.
Further, the identification module is also used for acquiring a wall model and a wall line of the target area according to the 3D point cloud;
the calculation module is used for acquiring the minimum distance of the lines for any two continuous lines except the wall line.
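Neither the wall-model extraction nor the line representation is spelled out in the description. As one assumed realization, the sketch below fits the dominant plane of the point cloud with a plain RANSAC loop, treats near-vertical planes as walls, and flags a continuous line as a wall line when most of its sampled points lie on such a plane, so that it can be skipped in the minimum-distance computation.

import numpy as np

def ransac_plane(points, iterations=300, threshold=0.02, seed=0):
    # Fit the dominant plane: repeatedly sample three points, build the plane
    # normal . x + d = 0, and keep the model with the most inliers.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue
        normal = normal / norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

def is_wall(normal, up=np.array([0.0, 0.0, 1.0]), tol_deg=10.0):
    # A wall plane's normal is nearly horizontal, i.e. nearly perpendicular to 'up'.
    return np.degrees(np.arcsin(abs(float(normal.dot(up))))) < tol_deg

def line_on_plane(line_points, normal, d, threshold=0.03):
    # Treat a continuous line as a wall line if most of its points lie on the plane.
    return (np.abs(line_points @ normal + d) < threshold).mean() > 0.8
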
Referring to fig. 1 and fig. 2, using the above spatial modeling apparatus, this embodiment further provides a spatial modeling method, comprising:
step 100, acquiring a 2D image and a 3D point cloud of a target area, wherein the shooting directions and the time of the 2D image and the 3D point cloud correspond to each other;
step 101, dividing the 2D image and the 3D point cloud into a plurality of sub-areas, wherein the sub-areas of the 2D image correspond to the sub-areas of the 3D point cloud one by one;
step 102, identifying the identity of an object model in each sub-region;
and 103, adding identity information to the object model in the sub-region.
Wherein step 101 comprises:
step 1011, acquiring a 3D image according to the 2D image and the 3D point cloud;
step 1012, acquiring all continuous lines in the 3D point cloud;
step 1013, for any two lines, obtaining the minimum distance of the lines;
1014, when the minimum distance is larger than a preset value, dividing sub-areas of the 3D point cloud according to the central point of the line corresponding to the minimum distance larger than the preset value;
and step 1015, obtaining a sub-region of the 2D image corresponding to the sub-region of the 3D point cloud according to the 3D image.
Wherein, between step 1011 and step 1012, the method further includes:
Acquiring a wall model and wall lines of the target area according to the 3D point cloud;
step 1013 is specifically:
and acquiring the minimum distance of the lines for any two continuous lines except the wall line.
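Step 1011, acquiring the 3D image from the 2D image and the 3D point cloud, is likewise left open. One plausible reading, sketched below under the assumption of a calibrated intrinsic matrix K and an extrinsic transform T between the 3D lens and the 2D lens (neither of which the description specifies), is to project each 3D point into the 2D image and attach the colour found there; the recorded pixel coordinates then also support step 1015, mapping each sub-region of the 3D point cloud to the corresponding sub-region of the 2D image.

import numpy as np

def colorize_point_cloud(points, image, K, T):
    # points: (N, 3); image: (H, W, 3) uint8; K: (3, 3) intrinsics; T: (4, 4) extrinsics
    # from point-cloud coordinates to the 2D camera frame (assumed calibration data).
    ones = np.ones((len(points), 1))
    cam = (np.hstack([points, ones]) @ T.T)[:, :3]          # points in camera coordinates
    in_front = cam[:, 2] > 0
    proj = cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]                         # pinhole projection to pixels
    h, w = image.shape[:2]
    valid = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    px = uv[valid].astype(int)
    colors[valid] = image[px[:, 1], px[:, 0]]
    # colors is the per-point colour of the "3D image"; uv and valid give the 3D-to-2D
    # correspondence used to cut out the matching 2D sub-region (step 1015).
    return colors, uv, valid
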
Example 2
This embodiment is substantially the same as embodiment 1 except that:
the processor comprises an acquisition module, a generation module, a matching module, a projection module, a statistic module and a search module.
For each sub-region, the acquisition module is configured to acquire a 2D image of the sub-region;
the generation module is used for generating a fingerprint character string of the 2D image;
the searching module is used for searching the identity of the 2D image according to the fingerprint character string.
Further, for each sub-region, the matching module is used for acquiring a 3D sub-region image according to the 2D image and the 3D point cloud;
the projection module is used for generating a plurality of 2D projection images corresponding to the sub-regions through projection of the 3D sub-region images in a plurality of directions;
the generation module is used for generating a fingerprint character string of each 2D projection image;
the searching module is used for searching the identity of the 2D projection image according to each fingerprint character string;
and the statistics module is used for counting the identities and taking the identity counted most often as the identity information of the object model in the sub-region.
Referring to fig. 3, using the above spatial modeling apparatus, this embodiment further provides a spatial modeling method in which identifying the identity of the object model in each sub-region specifically includes:
step 200, for each sub-region, acquiring a 2D image of the sub-region;
step 201, generating a fingerprint character string of the 2D image;
step 202, searching the identity of the 2D image according to the fingerprint character string.
The step 200 specifically includes:
for each subarea, acquiring a 3D subarea image according to the 2D image and the 3D point cloud;
generating a plurality of 2D projection images corresponding to the sub-regions through projection of the 3D sub-region images in a plurality of directions;
step 201 is to generate a fingerprint character string of each 2D projection image;
step 202 comprises:
searching the identity of the 2D projection image according to each fingerprint character string;
and counting the identities, and taking the identity counted most often as the identity information of the object model in the sub-region.
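Pulling the earlier sketches together, the following illustrative pipeline (camera calibration, the identity library and every threshold are assumed inputs, and the helpers are the ones sketched above rather than the patented implementation) follows the order of steps 100 to 103: fuse the 2D image with the 3D point cloud, drop wall lines, group the remaining continuous lines into sub-regions, and identify and label the object model in each sub-region.

import numpy as np

def model_space(points, lines, image, K, T, library):
    # points: (M, 3) point cloud; lines: list of (N, 3) continuous-line samples;
    # image: (H, W, 3) 2D image; K, T: assumed calibration; library: fingerprint -> identity.
    colors, uv, valid = colorize_point_cloud(points, image, K, T)     # step 1011: the "3D image"
    (normal, d), _ = ransac_plane(points)                             # assumed wall model
    object_lines = [l for l in lines
                    if not (is_wall(normal) and line_on_plane(l, normal, d))]
    labelled = []
    for region in group_lines_into_subregions(object_lines):          # steps 1012 to 1015
        region_points = np.vstack([object_lines[i] for i in region["line_indices"]])
        identity = identify_subregion(region_points, library)         # steps 200 to 202
        labelled.append({"center": region["center"], "identity": identity})
    return labelled                                                   # step 103: labelled object models
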
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the invention is defined by the appended claims. Various changes or modifications to these embodiments may be made by those skilled in the art without departing from the principle and spirit of this invention, and these changes and modifications are within the scope of this invention.

Claims (6)

1. A spatial modeling method based on 2D and 3D images is characterized by comprising the following steps:
acquiring a 2D image and a 3D point cloud of a target area, wherein the shooting directions and the time of the 2D image and the 3D point cloud correspond to each other;
dividing the 2D image and the 3D point cloud into a plurality of sub-areas, wherein the sub-areas of the 2D image correspond to the sub-areas of the 3D point cloud one by one;
identifying the identity of the object model in each sub-region;
adding identity information to the object models in the sub-regions;
wherein, divide 2D image and 3D point cloud into a plurality of subregions, include:
acquiring a 3D image according to the 2D image and the 3D point cloud;
acquiring a wall model and wall lines of the target area according to the 3D point cloud;
acquiring all continuous lines in the 3D point cloud;
acquiring the minimum distance of the lines for any two continuous lines except the wall line;
when the minimum distance is larger than a preset value, dividing sub-areas of the 3D point cloud according to the central point of a line corresponding to the minimum distance larger than the preset value;
and acquiring a subarea of the 2D image corresponding to the subarea of the 3D point cloud according to the 3D image.
2. The spatial modeling method of claim 1, wherein said identifying the identity of the object model within each sub-region comprises:
for each sub-region, acquiring a 2D image of the sub-region;
generating a fingerprint character string of the 2D image;
and searching the identity of the 2D image according to the fingerprint character string.
3. The spatial modeling method of claim 2, wherein said identifying the identity of the object model within each sub-region comprises:
for each subarea, acquiring a 3D subarea image according to the 2D image and the 3D point cloud;
generating a plurality of 2D projection images corresponding to the sub-regions through projection of the 3D sub-region images in a plurality of directions;
generating a fingerprint character string of each 2D projection image;
searching the identity of the 2D projection image according to each fingerprint character string;
and counting the identities, and taking the identity counted most often as the identity information of the object model in the sub-area.
4. A space modeling device based on 2D and 3D images is characterized in that the space modeling device comprises a 2D lens, a 3D lens and a processor, the processor comprises a dividing module, an identification module and an adding module,
the 2D lens and the 3D lens are respectively used for acquiring a 2D image and a 3D point cloud of a target area, and the shooting directions and the time of the 2D image and the 3D point cloud correspond to each other;
the dividing module is used for dividing the 2D image and the 3D point cloud into a plurality of sub-areas, and the sub-areas of the 2D image correspond to the sub-areas of the 3D point cloud one by one;
the identification module is used for identifying the identity of the object model in each sub-region;
the adding module is used for adding identity information to the object model in the sub-region;
the processor also comprises an acquisition module, a processing module, a calculation module and a matching module,
the acquisition module is used for acquiring a 3D image according to the 2D image and the 3D point cloud;
the processing module is used for acquiring all continuous lines in the 3D point cloud;
for any two lines, the calculation module is used for acquiring the minimum distance of the lines;
when the minimum distance is larger than a preset value, the dividing module is used for dividing sub-areas of the 3D point cloud according to the central point of the line corresponding to the minimum distance larger than the preset value;
the matching module is used for acquiring a subarea of a 2D image corresponding to the subarea of the 3D point cloud according to the 3D image;
the identification module is further used for acquiring a wall model and wall lines of the target area according to the 3D point cloud;
the calculation module is used for acquiring the minimum distance of the lines for any two continuous lines except the wall line.
5. The spatial modeling apparatus of claim 4, wherein the processor comprises an acquisition module, a generation module, and a lookup module,
for each sub-region, the acquisition module is configured to acquire a 2D image of the sub-region;
the generation module is used for generating a fingerprint character string of the 2D image;
the searching module is used for searching the identity of the 2D image according to the fingerprint character string.
6. The spatial modeling apparatus of claim 5, wherein the processor comprises a matching module, a projection module, and a statistics module,
for each subarea, the matching module is used for acquiring a 3D subarea image according to the 2D image and the 3D point cloud;
the projection module is used for generating a plurality of 2D projection images corresponding to the sub-regions through projection of the 3D sub-region images in a plurality of directions;
the generation module is used for generating a fingerprint character string of each 2D projection image;
the searching module is used for searching the identity of the 2D projection image according to each fingerprint character string;
and the statistics module is used for counting the identities and taking the identity counted most often as the identity information of the object model in the sub-area.
CN201910367407.0A 2019-05-05 2019-05-05 Space modeling method and device based on 2D and 3D images Active CN110188616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910367407.0A CN110188616B (en) 2019-05-05 2019-05-05 Space modeling method and device based on 2D and 3D images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910367407.0A CN110188616B (en) 2019-05-05 2019-05-05 Space modeling method and device based on 2D and 3D images

Publications (2)

Publication Number Publication Date
CN110188616A CN110188616A (en) 2019-08-30
CN110188616B true CN110188616B (en) 2023-02-28

Family

ID=67715498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910367407.0A Active CN110188616B (en) 2019-05-05 2019-05-05 Space modeling method and device based on 2D and 3D images

Country Status (1)

Country Link
CN (1) CN110188616B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390088A (en) * 2013-07-31 2013-11-13 浙江大学 Full-automatic three-dimensional conversion method aiming at grating architectural plan
CN104809689A (en) * 2015-05-15 2015-07-29 北京理工大学深圳研究院 Building point cloud model and base map aligned method based on outline
CN106778468A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 3D face identification methods and equipment
CN106778474A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 3D human body recognition methods and equipment
CN106909873A (en) * 2016-06-21 2017-06-30 湖南拓视觉信息技术有限公司 The method and apparatus of recognition of face
CN107093171A (en) * 2016-02-18 2017-08-25 腾讯科技(深圳)有限公司 A kind of image processing method and device, system
CN107633165A (en) * 2017-10-26 2018-01-26 深圳奥比中光科技有限公司 3D face identity authentications and device
CN107748869A (en) * 2017-10-26 2018-03-02 深圳奥比中光科技有限公司 3D face identity authentications and device
CN107844744A (en) * 2017-10-09 2018-03-27 平安科技(深圳)有限公司 With reference to the face identification method, device and storage medium of depth information
CN108009532A (en) * 2017-12-28 2018-05-08 盎锐(上海)信息科技有限公司 Personal identification method and terminal based on 3D imagings
CN108197549A (en) * 2017-12-28 2018-06-22 盎锐(上海)信息科技有限公司 Face identification method and terminal based on 3D imagings
CN108427871A (en) * 2018-01-30 2018-08-21 深圳奥比中光科技有限公司 3D faces rapid identity authentication method and device
CN108564018A (en) * 2018-04-04 2018-09-21 北京天目智联科技有限公司 A kind of biological characteristic 3D 4 D datas recognition methods and system based on infrared photography
CN108833890A (en) * 2018-08-08 2018-11-16 盎锐(上海)信息科技有限公司 Data processing equipment and method based on camera
CN108881888A (en) * 2018-08-08 2018-11-23 盎锐(上海)信息科技有限公司 Data processing equipment and method based on image collecting device
CN109241947A (en) * 2018-10-15 2019-01-18 盎锐(上海)信息科技有限公司 Information processing unit and method for the monitoring of stream of people's momentum
CN109269405A (en) * 2018-09-05 2019-01-25 天目爱视(北京)科技有限公司 A kind of quick 3D measurement and comparison method
CN109377551A (en) * 2018-10-16 2019-02-22 北京旷视科技有限公司 A kind of three-dimensional facial reconstruction method, device and its storage medium
CN109523628A (en) * 2018-11-13 2019-03-26 盎锐(上海)信息科技有限公司 Video generation device and method
CN109657702A (en) * 2018-11-23 2019-04-19 盎锐(上海)信息科技有限公司 3D deep semantic perception algorithm and device

Also Published As

Publication number Publication date
CN110188616A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
Kliper-Gross et al. Motion interchange patterns for action recognition in unconstrained videos
CN109934847B (en) Method and device for estimating posture of weak texture three-dimensional object
CN104050449B (en) A kind of face identification method and device
CN110705478A (en) Face tracking method, device, equipment and storage medium
CN110428449B (en) Target detection tracking method, device, equipment and storage medium
JP2016001447A (en) Image recognition system, image recognition device, image recognition method and computer program
CN109325456A (en) Target identification method, device, target identification equipment and storage medium
CN107832736B (en) Real-time human body action recognition method and real-time human body action recognition device
CN109858433B (en) Method and device for identifying two-dimensional face picture based on three-dimensional face model
CN104517101A (en) Game poker card recognition method based on pixel square difference matching
CN111429476B (en) Method and device for determining action track of target person
WO2021031446A1 (en) Offline individual handwriting recognition system and method employing two-dimensional dynamic feature
CN111008935A (en) Face image enhancement method, device, system and storage medium
CN110490153B (en) Offline handwriting individual recognition system and method based on three-dimensional dynamic characteristics
US8948461B1 (en) Method and system for estimating the three dimensional position of an object in a three dimensional physical space
CN108416800A (en) Method for tracking target and device, terminal, computer readable storage medium
CN104104911B (en) Timestamp in panoramic picture generating process is eliminated and remapping method and system
CN110188616B (en) Space modeling method and device based on 2D and 3D images
CN112270748A (en) Three-dimensional reconstruction method and device based on image
CN112257666B (en) Target image content aggregation method, device, equipment and readable storage medium
CN115035546A (en) Three-dimensional human body posture detection method and device and electronic equipment
CN114719759B (en) Object surface perimeter and area measurement method based on SLAM algorithm and image instance segmentation technology
CN113435342B (en) Living body detection method, living body detection device, living body detection equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
    Effective date of registration: 20230103
    Address after: 200120 room 607, building 2, No. 2555, xiupu Road, Pudong New Area, Shanghai
    Applicant after: SHANGHAI ONWING INFORMATION TECHNOLOGY Co.,Ltd.
    Address before: 201703 No.206, building 1, no.3938 Huqingping Road, Qingpu District, Shanghai
    Applicant before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.
GR01: Patent grant
TR01: Transfer of patent right
    Effective date of registration: 20230725
    Address after: 201703 Room 2134, Floor 2, No. 152 and 153, Lane 3938, Huqingping Road, Qingpu District, Shanghai
    Patentee after: Shanghai Qingyan Heshi Technology Co.,Ltd.
    Address before: 200120 room 607, building 2, No. 2555, xiupu Road, Pudong New Area, Shanghai
    Patentee before: SHANGHAI ONWING INFORMATION TECHNOLOGY Co.,Ltd.