WO2024001959A1 - Scanning processing method and apparatus, electronic device, and storage medium

Scanning processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2024001959A1
WO2024001959A1 PCT/CN2023/102125 CN2023102125W WO2024001959A1 WO 2024001959 A1 WO2024001959 A1 WO 2024001959A1 CN 2023102125 W CN2023102125 W CN 2023102125W WO 2024001959 A1 WO2024001959 A1 WO 2024001959A1
Authority
WO
WIPO (PCT)
Prior art keywords
current frame
frame image
verification
image
result
Prior art date
Application number
PCT/CN2023/102125
Other languages
English (en)
Chinese (zh)
Inventor
赵斌涛
江腾飞
张健
林忠威
Original Assignee
先临三维科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 先临三维科技股份有限公司
Publication of WO2024001959A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10008 Still image; Photographic image from scanner, fax or copier
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • the present disclosure relates to the field of scanning technology, and in particular, to a scanning processing method, device, electronic equipment and storage medium.
  • 3D scanners can realize 3D scanning of objects and are widely used in fields such as machinery and medical plastic surgery.
  • The technical problem addressed by this disclosure is that non-rigid deformation of the scanned object makes real-time scanning results inaccurate, which in turn degrades the three-dimensional model reconstruction effect.
  • embodiments of the present disclosure provide a scanning processing method, device, electronic device, and storage medium.
  • An embodiment of the present disclosure provides a scanning processing method, which includes: acquiring the current frame image and performing rigid body splicing on the current frame image and historical image data to obtain a first splicing result; when the error of the first splicing result is less than a preset first threshold, performing non-rigid body verification on the current frame image and the historical image data to obtain a verification result; and when the verification result indicates that the verification is successful, marking the current frame image as a valid scanned image.
  • embodiments of the present disclosure also provide a scanning processing device, which includes:
  • the first acquisition module is used to acquire the current frame image
  • a splicing module used to rigidly splice the current frame image and historical image data to obtain a first splicing result
  • a first verification module configured to perform non-rigid body verification on the current frame image and the historical image data to obtain a verification result when the error of the first splicing result is less than a preset first threshold;
  • a marking module configured to mark the current frame image as a valid scanned image when the verification result is successful.
  • Embodiments of the present disclosure further provide an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute them to implement the scanning processing method provided by the embodiment of the first aspect of the present disclosure.
  • an embodiment of the present disclosure also provides a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute the scanning processing method provided by the embodiment of the first aspect of the present disclosure.
  • The scanning processing solution acquires the current frame image and performs rigid body splicing on the current frame image and historical image data to obtain the first splicing result.
  • When the error of the first splicing result is less than the preset first threshold, non-rigid body verification is performed on the current frame image and the historical image data to obtain the verification result.
  • When the verification result indicates that the verification is successful, the current frame image is marked as a valid scanned image.
  • In this way, during non-rigid real-time scanning, non-rigid verification can be performed on the current frame image and, when it succeeds, the frame is marked as a valid scanned image, enabling non-rigid scanning at a higher frame rate and efficiency; subsequent 3D model reconstruction based on the valid scanned images further improves the accuracy and reliability of the reconstruction. (An overall pipeline sketch follows below.)
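  • As a rough illustration of this flow, the following Python sketch (a minimal sketch, not the disclosed implementation) arranges the per-frame decision logic; the helper callables for rigid splicing, non-rigid verification, and global feature matching, as well as the threshold value, are assumptions supplied by the caller.

```python
from typing import Callable, List, Sequence, Tuple

# Minimal sketch of the per-frame decision flow summarized above. The helper
# callables are injected because the disclosure does not fix their internals;
# everything here is an illustrative assumption, not the patented implementation.

def process_frame(
    current_frame,
    history,
    rigid_splice: Callable[[object, object], Tuple[object, float]],
    nonrigid_verify: Callable[[object, object], bool],
    match_global: Callable[[object], Sequence[object]],
    first_threshold: float,
    valid_frames: List[object],
) -> bool:
    """Return True if current_frame is kept as a valid scanned image."""
    # Step 101: rigid body splicing of the current frame with historical data.
    _, error = rigid_splice(current_frame, history)

    # Steps 102-103: frames that splice well rigidly go on to non-rigid verification.
    if error < first_threshold and nonrigid_verify(current_frame, history):
        valid_frames.append(current_frame)
        return True

    # Steps 205-207: otherwise match the frame's features against global features
    # and, if matching succeeds, verify against the matched frames instead.
    matched = match_global(current_frame)
    if matched and nonrigid_verify(current_frame, matched):
        valid_frames.append(current_frame)
        return True

    # Verification and matching both failed: the frame is discarded.
    return False
```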
  • Figure 1 is a schematic flowchart of a scanning processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another scanning processing method provided by an embodiment of the present disclosure.
  • Figure 3 is a schematic structural diagram of a scanning processing device provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart of a scanning processing method provided by an embodiment of the present disclosure.
  • the method can be executed by a scanning processing device, where the device can be implemented using software and/or hardware, and can generally be integrated in an electronic device. As shown in Figure 1, the method includes:
  • Step 101 Obtain the current frame image, perform rigid body splicing of the current frame image and historical image data, and obtain the first splicing result.
  • the current frame image refers to the Nth frame image obtained by continuously moving the camera during the real-time scanning process.
  • N is a positive integer greater than 1; that is, the current frame image can be the 2nd frame image, the 3rd frame image, and so on.
  • Historical image data refers to a single frame image that was validly scanned before the current frame image was acquired, or to an image obtained by splicing multiple validly scanned frames.
  • For example, if the current frame is the 3rd frame image, the scanner has captured the 3rd frame image and determined that the previous two frames were validly scanned, and the spliced image of those two frames can be selected as the historical image data.
  • the first splicing result refers to the result of rigid body splicing of the current frame image and the historical image data, which is the spliced image frame, including the overlapping area between the current frame image and the historical image data.
  • There are many ways to rigidly splice the current frame image and the historical image data to obtain the first splicing result.
  • In one example, the common features between the current frame image and the historical image data are obtained, the overlapping area is determined based on those features, and the current frame image and the historical image data are rigidly spliced based on the overlapping area to obtain the first splicing result.
  • In another example, matching feature point pairs between the current frame image and the historical image data are obtained, a transformation matrix is determined from the matched pairs, and the current frame image is transformed by this matrix and spliced with the historical image data to obtain the first splicing result (a code sketch of this feature-point approach follows below).
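  • As one common way to realize the second example above, the sketch below estimates a rigid transformation from matched 3D feature point pairs with the SVD-based Kabsch method and uses it to splice the current frame onto the historical data; the array layouts and the availability of matched pairs are assumptions for illustration, and the disclosure does not prescribe this particular method.

```python
import numpy as np

def estimate_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Estimate rotation R and translation t mapping src points onto dst points.

    src, dst: (N, 3) arrays of matched 3D feature points (current frame vs. history).
    Uses the SVD-based Kabsch method; assumes N >= 3 non-degenerate matches.
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def splice_frames(current_points: np.ndarray, history_points: np.ndarray,
                  matches: np.ndarray) -> np.ndarray:
    """Transform the current frame into the history frame and concatenate them.

    matches: (M, 2) integer array of (current_index, history_index) pairs.
    """
    R, t = estimate_rigid_transform(current_points[matches[:, 0]],
                                    history_points[matches[:, 1]])
    transformed = current_points @ R.T + t
    return np.vstack([history_points, transformed])
```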
  • Step 102 When the error of the first splicing result is less than the preset first threshold, perform non-rigid body verification on the current frame image and historical image data to obtain the verification result.
  • The error of the first splicing result refers to the error between the observation position of the same point in the image frame after splicing and its observation position in the image frame before splicing, that is, the error between the observation positions of the same point in different image frames.
  • the observation position is affected by the position and deformation field of the image frame.
  • the error of the first splicing result is calculated, and the error of the first splicing result is compared with the first threshold.
  • the first threshold can be selectively set according to the needs of the application scenario, such as adjusting the first threshold based on the accuracy of the scanning device, etc.
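  • As an illustration, the sketch below computes one possible splicing error (the root-mean-square distance between the observation positions of the same matched points after the current frame has been transformed into the historical frame) and compares it with the first threshold; the metric and the data layout are assumptions, not the disclosed definition of the error.

```python
import numpy as np

def splicing_error(transformed_current: np.ndarray,
                   history_points: np.ndarray,
                   matches: np.ndarray) -> float:
    """RMS distance between observation positions of the same matched points."""
    diff = transformed_current[matches[:, 0]] - history_points[matches[:, 1]]
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def passes_first_threshold(error: float, first_threshold: float) -> bool:
    """Step 102 gate: only frames below the first threshold go on to non-rigid verification."""
    return error < first_threshold
```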
  • Rigid splicing means splicing based on the overlapping areas of images. When the scanned object deforms (for example, a non-rigid object such as a person), the overlap area between the captured current frame image and the historical image data becomes smaller, so non-rigid verification is required.
  • Therefore, when the error of the first splicing result is less than the preset first threshold, non-rigid body verification needs to be performed on the current frame image and the historical image data.
  • There are many ways to perform non-rigid body verification on the current frame image and the historical image data to obtain the verification result.
  • In one example, the frame positions and/or deformation fields corresponding to the current frame image and the historical image data are adjusted multiple times; after each adjustment, non-rigid splicing is performed based on the adjusted current frame image and historical image data, and if the error of the adjusted splicing result is less than a certain threshold, the verification is deemed to have passed, otherwise it is deemed to have failed.
  • In another example, the relative frame position between the current frame image and the historical image data is adjusted, and rigid body splicing is performed based on the adjusted current frame image and historical image data; if the error of the adjusted splicing result is less than a certain threshold, the verification passes, otherwise it fails.
  • The above two methods are only examples of performing non-rigid body verification on the current frame image and the historical image data; the embodiments of the present disclosure do not specifically restrict how the verification is performed.
  • Step 103 When the verification result is successful, mark the current frame image as a valid scanned image.
  • When the verification result indicates success, the acquired current frame image is valid and is marked as a valid scanned image, so that subsequent three-dimensional model reconstruction can be performed based on the valid scanned images, further improving the accuracy and reliability of the reconstruction as well as the scanning effect and efficiency in non-rigid scanning scenarios.
  • When the error of the first splicing result is greater than or equal to the first threshold, or the verification result is a failure, further processing such as matching the features of the current frame image against the global features is required to determine whether the current frame image should be retained or discarded, further improving scanning accuracy.
  • The scanning processing solution provided by the embodiments of the present disclosure acquires the current frame image and performs rigid body splicing on the current frame image and historical image data to obtain the first splicing result.
  • When the error of the first splicing result is less than the preset first threshold, non-rigid body verification is performed on the current frame image and the historical image data to obtain the verification result.
  • When the verification result indicates that verification is successful, the current frame image is marked as a valid scanned image.
  • In this way, during non-rigid real-time scanning, non-rigid verification can be performed on the current frame image, and when it succeeds the frame is marked as a valid scanned image, achieving non-rigid scanning at a higher frame rate and efficiency; subsequent 3D model reconstruction based on the valid scanned images further improves the accuracy and reliability of the reconstruction.
  • FIG. 2 is a schematic flowchart of another scanning processing method provided by an embodiment of the present disclosure. Based on the above embodiment, this embodiment further optimizes the above scanning processing method. As shown in Figure 2, the method includes:
  • Step 201 Obtain the current frame image, perform rigid body splicing of the current frame image and historical image data, and obtain the first splicing result.
  • Step 201 is the same as step 101; please refer to the description of step 101, which will not be repeated here.
  • Step 202 When the error of the first splicing result is less than the preset first threshold, the frame positions and deformation fields corresponding to the current frame image and the historical image data are respectively adjusted according to a preset number of adjustments; after each adjustment, rigid body splicing is performed based on the adjusted current frame image and historical image data to obtain a second splicing result.
  • The deformation field consists of a small number of control points and is used to control how the point cloud deforms.
  • the frame image refers to the depth point cloud and landmark points seen by the camera at a certain position.
  • each frame contains an independent deformation field.
  • Frame position refers to the six dimensions of the frame's pose (it can rotate and translate).
  • By adjusting the frame positions and deformation fields, the relationship between image frames changes, so the second splicing result also changes.
  • the preset number of adjustments can be selected and set according to the needs of the application scenario.
  • The frame positions and deformation fields of the current frame image and the historical image data are optimized cyclically multiple times (because once the position and deformation field change, the correspondence between frames also changes; the number of iterations is a parameter, for example 5). If the error of the second splicing result can be reduced below the given second threshold, the non-rigid verification succeeds; otherwise it fails (a code sketch of this loop follows after step 204 below).
  • the second threshold can be set as needed, for example, adjusted according to the accuracy of the scanning device.
  • the adjusted current frame image and historical image data are rigidly spliced to obtain the second splicing result.
  • This is done in the same way as rigidly splicing the current frame image and historical image data to obtain the first splicing result; refer to that detailed description, which will not be repeated here.
  • Step 203 When the error of the second splicing result is less than the preset second threshold, it is determined that the verification is successful, and the current frame image is marked as a valid scanned image.
  • Step 204 When the error of the second splicing result is greater than or equal to the second threshold, it is determined that the verification fails, and the current frame image is deleted.
  • When the error of the second splicing result is less than the preset second threshold, the non-rigid verification is successful. For example, when the scanned object is a non-rigid body such as a person, breathing moves the scanned chest and affects the splicing result of the current frame image; therefore, if the non-rigid verification succeeds, the current frame image is marked as a valid scanned image.
  • When the error of the second splicing result is greater than or equal to the second threshold, the error after adjusting the relationship between image frames is still relatively large, so the verification is determined to have failed and the current frame image is deleted.
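  • A minimal sketch of the verification loop of steps 202 to 204 is shown below, assuming generic `adjust` and `splice_error` routines supplied by the caller (the disclosure does not fix how the frame position and deformation field are optimized); the iteration count of 5 follows the example given above, and the thresholds are illustrative.

```python
from typing import Callable, Tuple

def nonrigid_verify(
    current_frame,
    history,
    adjust: Callable[[object, object], Tuple[object, object]],
    splice_error: Callable[[object, object], float],
    second_threshold: float,
    num_adjustments: int = 5,   # "a parameter, for example 5" in the text above
) -> bool:
    """Steps 202-204: iteratively adjust pose/deformation field and re-splice.

    Returns True (verification succeeds) as soon as the re-spliced error drops
    below the second threshold, otherwise False after the preset number of rounds.
    """
    for _ in range(num_adjustments):
        # Adjust frame positions and deformation fields of both sides;
        # after each adjustment the frame-to-frame correspondences change.
        current_frame, history = adjust(current_frame, history)
        # Splice the adjusted data again and measure the second splicing error.
        if splice_error(current_frame, history) < second_threshold:
            return True   # step 203: frame can be marked as a valid scanned image
    return False          # step 204: verification failed, frame is deleted
```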
  • Step 205 When the error of the first splicing result is greater than or equal to the first threshold or the verification result is verification failure, obtain the current features of the current frame image, match the current features with the preset stored global features, and obtain the matching result.
  • Step 206 When the matching result is successful, obtain multiple frame images that successfully match the current frame image, perform non-rigid body verification on the current frame image and those frame images, and when the verification is successful, mark the current frame image as a valid scanned image.
  • Step 207 When the matching result is a matching failure, delete the current frame image.
  • The current features may be features calculated from the depth point cloud, landmark points, and texture map of the current frame image, such as three-dimensional coordinates, color information, or reflection-intensity information from the point cloud, the characteristics of the landmark points, and the characteristics of the texture map; these are selected according to the needs of the application scenario.
  • Global features refer to a feature set of all frame images that have been marked as valid scanned images, with redundant features removed.
  • the current features of the current frame image are obtained, and the current features are matched with the preset stored global features.
  • When the matching result is a successful match, multiple frame images that successfully match the current frame image are obtained, non-rigid body verification is performed on the current frame image and those frame images, and when the verification is successful the current frame image is marked as a valid scanned image.
  • The non-rigid body verification method is the same as the aforementioned non-rigid verification of the current frame image against historical image data; please refer to that specific description, which will not be elaborated here.
  • the current frame image is marked as a valid scanned image, and when the matching result is a matching failure, the current frame image is deleted.
  • Each current frame image is continuously processed with the same scheme. Taking advantage of the continuous movement of the camera, the current frame image and the historical image data are rigidly spliced; if the error of the splicing result is less than the first threshold, non-rigid verification is performed on the current frame image and the historical image data at the position obtained after rigid splicing. If the non-rigid verification passes, the current frame image is tracked successfully and the tracking process ends.
  • Otherwise, the features of the current frame image are computed and matched against the global features. If the matching fails, tracking of the current frame image fails and the frame is discarded; if the matching succeeds, all frame images that successfully match the current frame image are selected and undergo non-rigid verification with it. If the non-rigid verification passes, the current frame is tracked successfully; if it fails, tracking of the current frame image fails and the frame is discarded (a code sketch of the feature-matching step follows below).
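  • The sketch below shows one simple way, by nearest-neighbour descriptor distance, to match the current frame's features against the stored global features and recover the candidate valid frames for the subsequent non-rigid verification; the descriptor layout, distance threshold, minimum match count, and frame-id bookkeeping are all illustrative assumptions.

```python
import numpy as np

def match_against_global(current_desc: np.ndarray,
                         global_desc: np.ndarray,
                         global_frame_ids: np.ndarray,
                         max_distance: float = 0.7,
                         min_matches: int = 10):
    """Match current-frame descriptors to the global feature set.

    current_desc: (N, D) descriptors of the current frame.
    global_desc:  (M, D) deduplicated descriptors of all valid scanned frames.
    global_frame_ids: (M,) id of the valid frame each global descriptor came from.

    Returns the set of candidate frame ids if matching succeeds, else None.
    """
    # Pairwise Euclidean distances between current and global descriptors.
    dists = np.linalg.norm(current_desc[:, None, :] - global_desc[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    good = dists[np.arange(len(current_desc)), nearest] < max_distance

    if good.sum() < min_matches:
        return None                       # matching failed: frame is discarded
    # Frames contributing the successful matches are then non-rigidly verified
    # against the current frame (steps 206-207).
    return set(global_frame_ids[nearest[good]].tolist())
```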
  • After step 203, step 206, or step 207, step 208, step 209, or step 210 may be performed.
  • Step 208 Update the global features based on the current features to obtain the updated global features and store them.
  • the global features are updated based on the current features to obtain the updated global features and store them.
  • the current features are merged into the global features to obtain updated global features.
  • The updated global features can also undergo redundancy removal before being stored (a sketch of this update follows below).
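  • A sketch of step 208 under the assumption that features are stored as fixed-length descriptor rows: new current-frame descriptors are appended and near-duplicates (those within a chosen distance of an already stored descriptor) are dropped; the greedy strategy and radius are illustrative choices, not the disclosed redundancy-removal method.

```python
import numpy as np

def update_global_features(global_desc: np.ndarray,
                           current_desc: np.ndarray,
                           dedup_radius: float = 0.3) -> np.ndarray:
    """Merge current-frame descriptors into the global set, removing redundancy.

    A new descriptor is kept only if it is farther than dedup_radius from every
    descriptor already in the global set (greedy de-redundancy).
    """
    kept = []
    for desc in current_desc:
        if global_desc.size == 0:
            global_desc = desc[None, :]          # first descriptor ever stored
            continue
        if np.linalg.norm(global_desc - desc, axis=1).min() > dedup_radius:
            kept.append(desc)                    # sufficiently novel: keep it
    if kept:
        global_desc = np.vstack([global_desc, np.stack(kept)])
    return global_desc
```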
  • Step 209 Acquire all valid scanned images, perform feature extraction on all valid scanned images to obtain initial global features, perform redundancy-removal processing on the initial global features, and obtain the target global features and store them.
  • all valid scanned images refer to scanned images marked as valid after processing in all the foregoing embodiments.
  • Feature extraction is performed on all valid scanned images to obtain initial global features, and redundancy processing is performed on the initial global features.
  • The target global features are obtained and stored, and can be directly used during subsequent matching to further improve the efficiency of scanning processing.
  • Step 210 Acquire all valid scanned images, and adjust the frame positions and deformation fields corresponding to all valid scanned images so that the error between the observation positions of the same scanned point in different frame images is less than a preset third threshold, where the third threshold is smaller than the first threshold.
  • the third threshold can be selectively set according to the needs of the application scenario.
  • This makes the error between image frames as small as possible, solving loop-closure failures or loop-closure errors caused by deformation and accumulated error, and optimizing the global features.
  • The effect is to adjust the positions and deformation fields of all validly scanned frames, which is equivalent to applying a global deformation field (a code sketch follows below).
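  • The sketch below illustrates the idea of step 210 for the rigid part only: per-frame six-degree-of-freedom poses are jointly optimized so that the observation positions of the same scanned point agree across frames, and the result is checked against the third threshold; the residual definition, the use of scipy's least_squares, and the omission of deformation-field parameters are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def globally_adjust_poses(num_frames, observations, third_threshold=0.05):
    """Jointly adjust per-frame rigid poses so that observation positions of the
    same scanned point agree across frames (rigid-only sketch of step 210).

    observations: list of (frame_index, point_id, xyz) tuples, xyz in the frame's
    local coordinates. Returns (poses, ok) where poses is a (num_frames, 6) array
    of rotation-vector + translation parameters and ok tells whether the largest
    remaining disagreement is below the third threshold.
    """
    point_ids = sorted({pid for _, pid, _ in observations})
    pid_index = {pid: i for i, pid in enumerate(point_ids)}

    def world_positions(poses):
        # Transform every observation into world coordinates using its frame pose.
        out = np.zeros((len(observations), 3))
        for k, (f, _, xyz) in enumerate(observations):
            rot = Rotation.from_rotvec(poses[f, :3])
            out[k] = rot.apply(np.asarray(xyz, dtype=float)) + poses[f, 3:]
        return out

    def residuals(x):
        poses = x.reshape(-1, 6)
        world = world_positions(poses)
        # Mean world position of each scanned point across the frames observing it.
        sums = np.zeros((len(point_ids), 3))
        counts = np.zeros(len(point_ids))
        for k, (_, pid, _) in enumerate(observations):
            sums[pid_index[pid]] += world[k]
            counts[pid_index[pid]] += 1
        means = sums / counts[:, None]
        # Residual: deviation of each observation from its point's mean position.
        res = np.array([world[k] - means[pid_index[pid]]
                        for k, (_, pid, _) in enumerate(observations)])
        return res.ravel()

    x0 = np.zeros(num_frames * 6)            # start from identity poses
    sol = least_squares(residuals, x0)       # minimize cross-frame disagreement
    poses = sol.x.reshape(-1, 6)
    ok = bool(np.abs(residuals(sol.x)).max() < third_threshold)
    return poses, ok
```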
  • The scanning processing solution provided by the embodiments of the present disclosure acquires the current frame image and rigidly splices the current frame image and historical image data to obtain the first splicing result.
  • When the error of the first splicing result is less than the preset first threshold, the frame positions and deformation fields corresponding to the current frame image and the historical image data are adjusted according to a preset number of adjustments; after each adjustment, rigid body splicing is performed based on the adjusted current frame image and historical image data to obtain a second splicing result.
  • When the error of the second splicing result is less than the preset second threshold, the verification is determined to be successful and the current frame image is marked as a valid scanned image.
  • When the error of the first splicing result is greater than or equal to the first threshold or the verification fails, the current features of the current frame image are matched against the preset stored global features; when the matching result is successful, multiple frame images that successfully match the current frame image are obtained and non-rigid body verification is performed on the current frame image and those frame images, and when that verification succeeds the current frame image is marked as a valid scanned image. When the matching fails, or when the non-rigid verification against the matched frame images fails, the current frame image is deleted.
  • In addition, all valid scanned images are acquired, feature extraction is performed on them to obtain initial global features, deduplication is applied to the initial global features to obtain the target global features, which are then stored; the frame positions and deformation fields corresponding to all valid scanned images are also adjusted so that the error between the observation positions of the same scanned point in different frame images is less than a preset third threshold, where the third threshold is less than the first threshold.
  • FIG. 3 is a schematic structural diagram of a scanning processing device provided by an embodiment of the present disclosure.
  • the device can be implemented by software and/or hardware, and can generally be integrated in electronic equipment. As shown in Figure 3, the device includes:
  • the first acquisition module 301 is used to acquire the current frame image
  • the splicing module 302 is used to rigidly splice the current frame image and historical image data to obtain the first splicing result
  • the first verification module 303 is configured to perform non-rigid body verification on the current frame image and the historical image data to obtain a verification result when the error of the first splicing result is less than the preset first threshold;
  • the marking module 304 is configured to mark the current frame image as a valid scanned image when the verification result is successful.
  • the first verification module 303 is specifically used for:
  • the device also includes:
  • a second acquisition module configured to acquire the current characteristics of the current frame image when the error of the first splicing result is greater than or equal to the first threshold or the verification result is a verification failure
  • a matching module used to match the current features with preset stored global features to obtain a matching result
  • a third acquisition module configured to acquire multiple frames of images that successfully match the current frame image when the matching result is a successful match
  • the second verification module is configured to perform non-rigid body verification on the current frame image and the multi-frame images, and mark the current frame image as a valid scanned image when the verification is successful.
  • the device also includes:
  • a deletion module configured to delete the current frame image when the matching result is a matching failure or when the non-rigid body verification of the current frame image and the multi-frame image fails.
  • the device also includes:
  • An update module configured to update the global features based on the current features, obtain and store the updated global features.
  • the device also includes:
  • the third acquisition module is used to acquire all valid scanned images
  • An extraction module used to extract features from all valid scanned images to obtain initial global features
  • the deduplication module is used to deduplicate the initial global features to obtain and store the target global features.
  • the device also includes:
  • the fourth acquisition module is used to acquire all valid scanned images
  • the adjustment module is used to adjust the frame positions and deformation fields corresponding to all valid scanned images, so that the error between the observation positions of the same scanned point in different frame images is less than a preset third threshold, wherein the third threshold is smaller than the first threshold.
  • the scan processing device provided by the embodiment of the present disclosure can execute the scan processing method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • Embodiments of the present disclosure also provide a computer program product, including a computer program/instructions that, when executed by a processor, implement the scanning processing method provided by any embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 4 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • The electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403.
  • The RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, ROM 402 and RAM 403 are connected to each other via a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404.
  • The following devices may be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication device 409 may allow the electronic device 400 to communicate wirelessly or wiredly with other devices to exchange data.
  • Although FIG. 4 illustrates the electronic device 400 with various means, it should be understood that it is not required to implement or provide all of the illustrated means; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • The computer program may be downloaded and installed from a network via the communication device 409, installed from the storage device 408, or installed from the ROM 402.
  • When the computer program is executed by the processing device 401, the above-mentioned functions defined in the scanning processing method of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device .
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol such as HTTP (Hyper Text Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs.
  • When the one or more programs are executed by the electronic device, the electronic device: acquires the current frame image and performs rigid body splicing of the current frame image and historical image data to obtain the first splicing result; when the error of the first splicing result is less than the preset first threshold, performs non-rigid body verification on the current frame image and the historical image data to obtain the verification result; and when the verification result indicates that verification is successful, marks the current frame image as a valid scanned image.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • The units involved in the embodiments of the present disclosure can be implemented in software or hardware, and in some cases the name of a unit does not constitute a limitation on the unit itself.
  • For example, and without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • The present disclosure provides an electronic device, including:
  • a processor; and a memory for storing instructions executable by the processor;
  • wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the scanning processing methods provided by the present disclosure.
  • The present disclosure provides a computer-readable storage medium; the storage medium stores a computer program, and the computer program is used to perform any of the scanning processing methods provided by the present disclosure.
  • the scanning processing method provided by the present disclosure can perform non-rigid body verification on the current frame image during the non-rigid body real-time scanning process.
  • When the verification is successful, the current frame image is marked as a valid scanned image, achieving a higher frame rate and efficiency for non-rigid scanning.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

Embodiments of the present disclosure relate to a scanning processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring the current frame image and performing rigid body splicing on the current frame image and historical image data to obtain a first splicing result; when an error of the first splicing result is less than a preset first threshold, performing non-rigid body verification on the current frame image and the historical image data to obtain a verification result; and when the verification result indicates that the verification is successful, marking the current frame image as a valid scanned image. By means of this technical solution, during a non-rigid real-time scanning process, non-rigid verification can be performed on the current frame image, and when the verification is successful the current frame image is marked as a valid scanned image, so that non-rigid scanning is performed at a relatively high frame rate and efficiency; subsequent three-dimensional model reconstruction is then performed on the basis of the valid scanned images, which further improves the accuracy and reliability of the three-dimensional model reconstruction.
PCT/CN2023/102125 2022-06-28 2023-06-25 Procédé et appareil de traitement de balayage, et dispositif électronique et support d'enregistrement WO2024001959A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210753949.3A CN115115593B (zh) 2022-06-28 2022-06-28 一种扫描处理方法、装置、电子设备及存储介质
CN202210753949.3 2022-06-28

Publications (1)

Publication Number Publication Date
WO2024001959A1 (fr)

Family

ID=83331502

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/102125 WO2024001959A1 (fr) 2022-06-28 2023-06-25 Procédé et appareil de traitement de balayage, et dispositif électronique et support d'enregistrement

Country Status (2)

Country Link
CN (1) CN115115593B (fr)
WO (1) WO2024001959A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115593B (zh) * 2022-06-28 2024-09-10 先临三维科技股份有限公司 一种扫描处理方法、装置、电子设备及存储介质
CN116152474A (zh) * 2022-12-02 2023-05-23 先临三维科技股份有限公司 扫描数据的处理方法、装置、设备及介质
CN116541576B (zh) * 2023-07-06 2023-09-29 浙江档科信息技术有限公司 基于大数据应用的档案数据管理标注方法及系统
CN118379469A (zh) * 2024-05-28 2024-07-23 先临三维科技股份有限公司 扫描方法、电子设备和计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120293608A1 (en) * 2011-05-17 2012-11-22 Apple Inc. Positional Sensor-Assisted Perspective Correction for Panoramic Photography
CN103534726A (zh) * 2011-05-17 2014-01-22 苹果公司 用于全景摄影的位置传感器辅助的图像配准
CN103606189A (zh) * 2013-11-19 2014-02-26 浙江理工大学 一种面向非刚体三维重建的轨迹基选择方法
CN108648145A (zh) * 2018-04-28 2018-10-12 北京东软医疗设备有限公司 图像拼接方法及装置
CN110889819A (zh) * 2019-11-29 2020-03-17 上海联影医疗科技有限公司 医学图像扫描方法、装置、设备和存储介质
CN115115593A (zh) * 2022-06-28 2022-09-27 先临三维科技股份有限公司 一种扫描处理方法、装置、电子设备及存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101254120B (zh) * 2008-03-17 2010-09-08 北京好望角图像技术有限公司 实时超声宽视野成像方法
CN102324027B (zh) * 2011-05-27 2013-05-29 汉王科技股份有限公司 扫描识别装置和方法
CN105913492B (zh) * 2016-04-06 2019-03-05 浙江大学 一种rgbd图像中物体形状的补全方法
TW201915592A (zh) * 2017-09-29 2019-04-16 揚明光學股份有限公司 三維列印系統及其製造方法
CN107945109B (zh) * 2017-11-06 2020-07-28 清华大学 基于卷积网络的图像拼接方法及装置
CN108734657B (zh) * 2018-04-26 2022-05-03 重庆邮电大学 一种具有视差处理能力的图像拼接方法
CN110111250B (zh) * 2019-04-11 2020-10-30 中国地质大学(武汉) 一种鲁棒的自动全景无人机图像拼接方法及装置
CN110097063A (zh) * 2019-04-30 2019-08-06 网易有道信息技术(北京)有限公司 电子设备的数据处理方法、介质、装置和计算设备
CN110197455B (zh) * 2019-06-03 2023-06-16 北京石油化工学院 二维全景图像的获取方法、装置、设备和存储介质
CN112199846A (zh) * 2020-10-14 2021-01-08 广东珞珈睡眠科技有限公司 基于三维人体重建技术分析和定制床垫的系统
CN114445274A (zh) * 2020-11-06 2022-05-06 中煤航测遥感集团有限公司 图像拼接方法、装置、电子设备及存储介质
CN113191946B (zh) * 2021-03-02 2022-12-27 中国人民解放军空军航空大学 航空三步进面阵图像拼接方法
CN114187366A (zh) * 2021-12-10 2022-03-15 北京有竹居网络技术有限公司 一种相机安装校正方法、装置、电子设备及存储介质
CN114463184B (zh) * 2022-04-11 2022-08-02 国仪量子(合肥)技术有限公司 图像拼接方法、装置及存储介质、电子设备


Also Published As

Publication number Publication date
CN115115593B (zh) 2024-09-10
CN115115593A (zh) 2022-09-27

Similar Documents

Publication Publication Date Title
WO2024001959A1 (fr) Procédé et appareil de traitement de balayage, et dispositif électronique et support d'enregistrement
EP3872764A1 (fr) Procédé et appareil de construction de carte
US11367313B2 (en) Method and apparatus for recognizing body movement
WO2023213253A1 (fr) Procédé et appareil de traitement de données de balayage, dispositif électronique et support
WO2023213252A1 (fr) Procédé et appareil de traitement de données de balayage, dispositif et support
WO2023179310A1 (fr) Procédé et appareil de restauration d'image, dispositif, support et produit
WO2024109795A1 (fr) Procédé et appareil de traitement de balayage, dispositif et support
CN112907628A (zh) 视频目标追踪方法、装置、存储介质及电子设备
WO2023082922A1 (fr) Procédé et dispositif de positionnement d'objet dans une condition d'observation discontinue, et support de stockage
CN114399588A (zh) 三维车道线生成方法、装置、电子设备和计算机可读介质
WO2023237065A1 (fr) Procédé et appareil de détection de fermeture de boucle, dispositif électronique et support
CN116188583B (zh) 相机位姿信息生成方法、装置、设备和计算机可读介质
WO2020155908A1 (fr) Procédé et appareil de génération d'informations
WO2022194145A1 (fr) Procédé et appareil de détermination de position de photographie, dispositif et support
CN112880675B (zh) 用于视觉定位的位姿平滑方法、装置、终端和移动机器人
CN111383337B (zh) 用于识别对象的方法和装置
CN111368015B (zh) 用于压缩地图的方法和装置
US20240282139A1 (en) Method, apparatus and electronic device for image processing
WO2023226065A1 (fr) Procédé et appareil de traitement de données de positionnement d'ellipse, et dispositif et support lisible par ordinateur
CN118229171B (zh) 电力设备存放区域信息展示方法、装置与电子设备
US20240153110A1 (en) Target tracking method, apparatus, device and medium
CN114863025B (zh) 三维车道线生成方法、装置、电子设备和计算机可读介质
CN112883757B (zh) 生成跟踪姿态结果的方法
WO2023025181A1 (fr) Procédé et appareil de reconnaissance d'image, et dispositif électronique
WO2024036764A1 (fr) Procédé et appareil de traitement d'image, dispositif et support

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23830137

Country of ref document: EP

Kind code of ref document: A1