FI130306B - Method of determining cutting point of wood log - Google Patents

Method of determining cutting point of wood log

Publication number: FI130306B
Authority: FI (Finland)
Prior art keywords: wood log, log, wood, shape, detected
Application number: FI20225507A
Other languages: Finnish (fi), Swedish (sv)
Other versions: FI20225507A1 (en)
Inventor
Mazhar Mohsin
Pekka Toivanen
Keijo Haataja
Antti Väänänen
Juha Hiltunen
Mika Hiltunen
Noora Hiltunen
Eliisa Hiltunen
Jussi Virtanen
Original Assignee
Itae Suomen Yliopisto
KMK Vision Oy
Application filed by Itae Suomen Yliopisto and KMK Vision Oy
Priority: FI20225507A (FI130306B)
Priority: PCT/FI2023/050313 (WO2023237812A1)
Application granted
Publication of FI130306B
Publication of FI20225507A1

Classifications

    • G06T7/60 Analysis of geometric attributes (image analysis)
    • G06N3/02 Neural networks (computing arrangements based on biological models)
    • G06N3/08 Learning methods
    • G06T7/0004 Industrial image inspection
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/82 Image or video recognition or understanding using neural networks


Abstract

A method of determining a cutting point of a wood log. The method comprises acquiring a plurality of video frames, detecting the wood log from the video frames and generating masks for the video frames to indicate the wood log. A synthetic wood log shape is created from the masks, measurements of the wood log are estimated, and any bends are detected from the synthetic wood log shape. Finally, the cutting point of the wood log is determined based on the measurements and bends.

Description

METHOD OF DETERMINING CUTTING POINT OF WOOD LOG
FIELD OF THE INVENTION
The invention relates to inspecting and handling of wood logs or tree trunks in general. The invention specifically relates to a method for detecting an optimal cutting point of a wood log aiming for an increased yield of sawn wood products.
PRIOR ART
Modern harvesters are still today operated manually when forests are harvested. The user of the harvester bears great responsibility for the quality of the harvested wood. For example, logs with bends cannot be used in full length. And in general, the length of a wood log is an important factor. In many countries, sawn wood products are sold in lengths of 30 cm increments from a minimum length of 4.00 metres, allowing only a small tolerance from -2 cm to +4 cm. For example, wood logs that are cut to a length of 4.55 metres due to a measuring error have to be cut to 4.30 metres and the remaining 25 cm is wasted. The worst case occurs with wood logs which are cut aiming for the minimum length of 4.00 metres. If the resulting wood log is 3.95 metres due to a measuring error, the entire wood log is wasted.
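The length arithmetic above can be sketched as follows. This is an illustration of the stated rules only, not a routine from the patent: valid sellable lengths start at 4.00 m in 30 cm increments, and a cut log can be sold at a nominal length whose lower tolerance bound (nominal - 2 cm) it still reaches.

```python
# Valid sawn-wood lengths: 4.00 m minimum, 30 cm increments,
# tolerance -2 cm to +4 cm around the nominal length.
MIN_LENGTH_CM = 400
INCREMENT_CM = 30
TOL_LOW_CM = -2   # a log may be up to 2 cm short of the nominal length
TOL_HIGH_CM = 4   # ...or up to 4 cm over it (not needed below, shown for completeness)

def usable_length_cm(cut_length_cm: int) -> int:
    """Longest sellable nominal length (cm) a cut log can yield, or 0 if wasted."""
    if cut_length_cm < MIN_LENGTH_CM + TOL_LOW_CM:
        return 0  # shorter than the minimum even with tolerance: whole log wasted
    n = (cut_length_cm - MIN_LENGTH_CM - TOL_LOW_CM) // INCREMENT_CM
    return MIN_LENGTH_CM + n * INCREMENT_CM

usable_length_cm(455)  # 430: the 25 cm over 4.30 m is wasted, as in the example
usable_length_cm(395)  # 0: below 3.98 m, the entire log is wasted
```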
Typically, the harvesters have a measuring wheel running along a tree trunk to measure the length of a wood log to be cut. Slipping of the measuring wheel occurs quite often which leads to measurement errors and ultimately to waste of wood material.
CN 108262809A discloses a plank processing method and a device based on artificial intelligence.
CN 109711611A discloses a method for identifying outturn rate when a log is cut lengthwise into planks and other lumber products.
There are methods for detecting shape and size of an object based on images or video frames of the object. However, commercial success requires these methods to be faster and more accurate than a human operator, and that has not been the case so far.
BRIEF DESCRIPTION OF THE INVENTION
The object of the invention is a method which alleviates the drawbacks of the prior art. The method concerns determining a cutting point of a wood log to reduce waste of wood material. The method relies on video frames extracted from a video feed from which a wood log is detected. The detected wood log is measured and checked for any bends. A cutting point is determined based on the measurements, bends and desired length of a wood log.
BRIEF DESCRIPTION OF THE FIGURES
The invention is now described in more detail in connection with preferred embodiments, with reference to the accompanying drawings, in which:
Fig. 1 shows a flowchart of a method of detecting a wood log;
Fig. 2 shows a flowchart of a method of determining a cutting point according to an embodiment;
Fig. 3 illustrates a topography of a system for implementing a method according to an embodiment;
Fig. 4 illustrates a topography of a detection module of a system according to an embodiment; and
Fig. 5 illustrates a topography of a segmentation module of a system according to an embodiment.
DETAILED DESCRIPTION OF THE INVENTION
Figure 2 shows a flow chart of a method of determining a cutting point of a wood log according to an embodiment. The method begins with a step 11 of acquiring a plurality of video frames from a video feed, preferably from a single video feed. Preferably, a Full-HD video feed is used, i.e. each frame comprising about 2 million pixels and the frame rate being at least 20 frames per second. The video frames are extracted and converted to single images. The video frames extracted from the video feed are referred to as images throughout the present disclosure. Video feed can be collected, for example, from a camera mounted on a harvester head or somewhere else on a forestry vehicle. Preferably, two cameras are used for creating two video feeds showing the wood log from different angles and the method of determining a cutting point of a wood log is performed for both video feeds. Preferably, the two cameras are aimed at a wood log at a 90° angle to each other. In an embodiment, two video feeds can be used simultaneously as an input.
The method comprises a step 12 of detecting a wood log from said plurality of video frames. Preferably, a deep convolutional neural network is trained with a large-scale image dataset for feature extraction. Backbone networks extract rich features from the images which are then used as filters in detection modules for wood log detection. It should be noted that there are various methods for detecting a wood log from an image and one example of such a method is disclosed herein while referring to Figure 1.
Figure 1 shows a flowchart of a method of detecting a wood log according to an embodiment. The method begins with a step 1 of receiving video feed from a camera mounted on a harvester head. On the harvester head, the camera always has a clear line of sight to a wood log that is being processed with the harvester. Preferably, two cameras are used for creating two video feeds showing the wood log from different angles and the method of detecting a wood log is performed for both video feeds. In an embodiment, two video feeds can be used simultaneously as an input.
In the next step 2, frames of the video feed are extracted and converted to single images. The video frames extracted from the video feed may be referred to as images throughout the present disclosure.
The method comprises a step 3 for extracting features from the single images by the backbone networks. A deep convolutional neural network is trained with a large-scale image dataset for feature extraction. The backbone networks extract rich features from the images which are then used as filters in detection modules for wood log detection. The images can be pre-processed for training purposes and to make the system more accurate, data augmentation can be used to make more training data. In an embodiment, the backbone network comprises preferably 10-30 layers of convolution operations and more preferably 16-22 layers of convolution operations.
The method further comprises a step 4 of using extracted features as filters for subsequent images. The deep convolutional neural network comprises preferably 200 to 2000 nodes, more preferably 400 to 1500 nodes and even more preferably 600 to 1000 nodes. In an embodiment, the deep convolutional neural network comprises preferably 20 to 40 layers of convolution operations and more preferably 23 to 30 layers of convolution operations. In an embodiment, a region of interest pooling of the deep convolutional neural network uses one to six convolutional layers.
The method also comprises a step 5 of defining probabilities of a recognized object being a wood log for regions of interest based on the extracted features. In an embodiment, in the step of defining probabilities, a region of interest proposal network comprises preferably 2 to 6 layers of convolution operations. In an embodiment, final classification within the deep convolutional neural network is performed using fully connected layers in the step 5 of defining probabilities.
The method further comprises a step 6 of creating a bounding box around an object in a region of interest with the highest probability.
In an embodiment, the deep convolutional neural network (DCNN) calculates 1x1 pointwise convolution operations for all three color channels simultaneously. Preferably, said 1x1 pointwise convolution operations are applied to combine the outputs from a depthwise operation. The deep convolutional neural network of the present disclosure preferably uses stride 1 and the ReLU function, but other stride configurations and other activation functions can also be used.
The method of determining the cutting point of the wood log further comprises a step 13 of generating a plurality of masks for said plurality of video frames. Each of the plurality of masks indicates pixels representing a wood log in an individual video frame. In other words, the masks are binary, so a single pixel either forms part of the mask and part of the wood log or it does not. Preferably, the plurality of masks comprises one or more mask patches of the detected wood log.
Use of the mask patches makes estimation of the shape of the wood log more precise. In an embodiment, the input video frames are processed in real-time to generate binary masks for each frame such that pixels belonging to an object exhibiting motion are labelled as a wood log. In an embodiment, the method may further comprise a step of predicting a mask for a frame based on a mask of the previous frame.
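The motion-based labelling described above can be sketched with simple frame differencing. This is an illustrative simplification of what a learned segmentation model does, not the patent's actual network; the threshold is an assumption:

```python
# Sketch: label pixels as "wood log" when they exhibit motion between frames.
def motion_mask(prev_frame, frame, threshold=20):
    """Binary mask: 1 where pixel intensity changed by more than `threshold`."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_prev, row)]
        for row_prev, row in zip(prev_frame, frame)
    ]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 80, 10], [90, 10, 10]]
motion_mask(prev, curr)  # [[0, 1, 0], [1, 0, 0]]
```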
The method also comprises a step 14 of creating a synthetic wood log shape which represents the shape of the wood log and is based on the plurality of masks created in step 13 of generating a plurality of masks. Preferably, the synthetic wood log shape is very close to the actual shape of the wood log. In an embodiment, the step of creating the synthetic wood log shape comprises assembling the mask patches to create the synthetic wood log shape.
The method further comprises a step 15 of estimating measurements of the wood log based on the synthetic wood log shape. In an embodiment, the measurements can be estimated by calculating a distance from pixels belonging to the wood log in the first frame, when the wood log is detected by the detection module, and doing the distance calculation in the subsequent frames from the prior pixels. Kalman filtering motion estimation and trajectory calculation can be used. Pixel distances can be converted into real-world coordinate distances, assuming a proper camera calibration has been done.
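The pixel-to-real-world conversion mentioned above can be sketched with a single calibrated scale factor. This is a simplification under an assumed calibration model (a full camera calibration would use intrinsics and camera pose, which the patent does not detail):

```python
# Sketch: convert a pixel displacement to a real-world distance using a
# pre-calibrated millimetres-per-pixel scale factor for the working distance.
def pixels_to_mm(pixel_distance: float, mm_per_pixel: float) -> float:
    """Convert an image-plane distance to a real-world distance."""
    return pixel_distance * mm_per_pixel

# e.g. an assumed calibration of 2.5 mm per pixel: 1600 pixels along the log = 4.0 m
pixels_to_mm(1600, 2.5)  # 4000.0
```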
In an embodiment, the method further comprises a step of drawing a contour around the detected log. The contour can be used to indicate a wood log of interest if the images contain more than one wood log. The contour can also be displayed to a user for visual verification of proper wood log detection.
The method of determining the cutting point of the wood log further comprises a step 16 of detecting a bend of the synthetic wood log shape. The step 16 preferably comprises detecting pixels in the plurality of video frames deviating from a rectangular shape of the detected wood log. In general terms, bends can be detected as deviations from a rectangular shape of the detected wood log.
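The contour-drawing step can be sketched as marking the boundary pixels of the binary mask. This is an assumed stand-in for the contour extraction, which the patent leaves unspecified:

```python
# Sketch: a mask pixel lies on the contour if it is set and has at least one
# unset 4-neighbour, or lies on the image border.
def contour(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [
                mask[ny][nx] if 0 <= ny < h and 0 <= nx < w else 0
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            ]
            if not all(neighbours):
                out[y][x] = 1  # boundary pixel of the detected log
    return out
```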
Finally, the method comprises a step 17 of determining the cutting point of the wood log. The cutting point is determined based on the estimated measurements, detected bend - or lack thereof - and a desired log-wood length.
The desired log-wood length can be user defined or alternatively, one or more predefined values for log-wood length can be set. In an embodiment, the step 17 of determining the cutting point comprises fitting a rectangle inside the detected wood log, where the length of the rectangle determines the length of a cut wood log.
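The rectangle-fitting idea above can be sketched in a deliberately simplified one-dimensional form. This is not the patent's algorithm, just an illustration: for a roughly horizontal log mask, the length of the longest run of columns that each contain a full-width vertical span of log pixels approximates the length of the rectangle that fits inside the log:

```python
# Sketch: longest horizontal run of columns that each contain `width`
# consecutive set pixels; a bend breaks the run and shortens the fit.
def max_rect_length(mask, width):
    h, w = len(mask), len(mask[0])
    best = run = 0
    for x in range(w):
        # does column x contain `width` consecutive set pixels?
        consec = col_ok = 0
        for y in range(h):
            consec = consec + 1 if mask[y][x] else 0
            if consec >= width:
                col_ok = 1
                break
        run = run + 1 if col_ok else 0
        best = max(best, run)
    return best

straight = [[1] * 6 for _ in range(3)]
max_rect_length(straight, 3)  # 6: the whole log fits the rectangle
```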
Figures 3-5 will be discussed in the following as presenting a system suitable for implementing a claimed method. There are many ways to implement the method using, for example, convolutional neural networks (CNN) and many more will become available as the technology advances. The embodiments of the methods of the present disclosure are not limited to a specific system or architecture. The following system description is included for the sake of teaching a person skilled in the art to implement the disclosed embodiments of the method.
Figure 3 illustrates a topography of a system for implementing a method according to an embodiment.
The system requires an input 20 of data in the form of video frames. Video feed is collected from a camera mounted e.g. on a harvester head. The frames are extracted and converted to single images. The images can then be pre-processed for training purposes. To make the model more accurate, data augmentation can be used to make more training data.
Ground truth data for training purposes is generated by using a data annotation tool to manually annotate two different classes: wood log and background. JSON data containing bounding box information of the wood log (x, y coordinates, width and height) and a mask image is generated which contains the black and white segmented area of the wood log.
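A ground-truth record of the kind described above might look like the following. The field names are assumptions; the patent only specifies that the JSON contains bounding box coordinates, width, height and a reference to the black-and-white mask image:

```python
# Sketch of one annotation record (field names are hypothetical).
import json

annotation = {
    "image": "frame_000123.png",
    "class": "wood_log",                                  # vs. "background"
    "bbox": {"x": 412, "y": 168, "width": 980, "height": 140},
    "mask": "frame_000123_mask.png",                      # segmented area
}
record = json.dumps(annotation)
```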
The backbone network 21 is a deep convolutional neural network trained with a large-scale image dataset for feature extraction. The backbone networks extract rich features from the images which are then used as filters in detection modules for wood log detection.
For real-time detection and reducing the computation on convolution operations, the network is designed in a way that divides the standard convolution into depthwise and 1x1 pointwise convolution. The input and the filters are split into different channels. The filters are applied on each input channel separately.
For each channel, the corresponding filter is used in the convolution operation.
Then a 1 x 1 pointwise convolution is applied to combine the outputs from the depthwise operation. These operations reduce the model size, its parameters and computation time.
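The saving from this factorization can be quantified by counting parameters, a standard illustration (the example channel counts are assumptions, not from the patent):

```python
# Parameter counts for standard vs. depthwise-separable convolution.
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """A standard k x k convolution mixes all input channels per output channel."""
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k per channel, then a 1x1 pointwise to combine channels."""
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 convolution from 3 colour channels to 32 feature maps:
standard_conv_params(3, 3, 32)   # 864
separable_conv_params(3, 3, 32)  # 123 (27 depthwise + 96 pointwise)
```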
The network architecture, configuration and training details are included in the following paragraphs as the backbone network is constructed and trained along with the detection module.
A detection module 22 is responsible for detection of wood log in real- time. Details of an embodiment of the detection module are shown in Figure 4.
Video frames are provided as an input to the module and it processes them in real-time to predict the bounding boxes for the detected wood log. The detection module consists of a deep CNN model which is trained and fine-tuned on wood log images.
The detection module is fine-tuned using the wood log dataset. The detection module consists of regional convolutional neural networks that have a Region of Interest (ROI) extractor, a Region Proposal Network 32 and a Bounding Box (BBOX) regressor. The network extracts region proposals containing a potential wood log in the image. An input frame or image is given as input to the network and its features are extracted as feature maps 31 via a pre-trained CNN (Backbone network 21). The features are sent to two different components of the network architecture. The RPN 32 is used to determine the wood log location. The Region of Interest (ROI) Pooling module 33 proposes the bounding box ROIs based on the features extracted in the previous step. The extracted features and ROIs are then passed into two fully connected layers to obtain a class label 34 and bounding box coordinates 35 for the final localization of the wood log. These are the outputs 36 from the detection module.
Network training consists of several steps:
1. The labeled wood log images or ground truth wood log image data are split into multiple sets, e.g. training, validation and testing splits.
2. The deep convolutional neural network is constructed with Conv2D, BatchNormalization and other layers, according to the requirements of the application.
3. The parameters such as training set images, number of epochs, batch size, validation set images, optimizer, learning rate and loss functions are given to the training utility program. Some of these parameters are adjusted after a couple of test runs and optimized for better results.
4. An inference model is built and stored.
5. After successful training, the wood log test image dataset is used to test the accuracy of the trained model for wood log detection.
6. An output image with a bounding box and a class label with a probability score is displayed for successful tests.
7. Training requires large computation resources; for this purpose a powerful computer having multiple GPUs is used.
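Training step 1 above can be sketched as follows. The 80/10/10 split ratios and deterministic seed are assumptions for illustration; the patent does not specify them:

```python
# Sketch: split labelled wood log images into training/validation/test sets.
import random

def split_dataset(images, train=0.8, val=0.1, seed=0):
    """Shuffle and split a list of image paths; the remainder is the test set."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = images[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

images = [f"img_{i}.png" for i in range(100)]
train_set, val_set, test_set = split_dataset(images)  # 80 / 10 / 10 images
```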
The network consists of a Backbone network 21 and Detection module 22. The network consists of 26 convolution (Conv2D) layers and inverted residual layers. The convolution layers are followed by batch normalization and have a ReLU function. Some of the layers have a hardswish function, which is similar to the sigmoid function but with a small modification for optimization purposes. These layers are followed by a feature pyramid network which has a max pooling layer.
The detector part consists of a regional proposal network which has 3 Conv2D layers. It takes the features from the preceding layers to propose regions potentially containing a wood log. The final fully connected layers (34, 35) propose the bounding box and class for the detected wood log.
The detection module 22 takes the following inference steps:
1. Input video frames.
2. Extract regions of the image that potentially contain a wood log.
3. Use the extracted features from the backbone network to compute features for each proposed region.
4. Classify each proposed region.
5. Generate the (x, y) coordinates for the proposed wood log in the frame.
6. Assign a class label.
7. Assign a probability score for each label.
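The hardswish activation mentioned above has a commonly used closed form; the patent does not state its formula, so this is the standard definition rather than the patent's own:

```python
# Hardswish: a cheap piecewise-linear approximation of x * sigmoid(x) (swish).
def relu6(x: float) -> float:
    """ReLU capped at 6."""
    return min(max(0.0, x), 6.0)

def hardswish(x: float) -> float:
    """hardswish(x) = x * ReLU6(x + 3) / 6."""
    return x * relu6(x + 3.0) / 6.0

hardswish(-4.0)  # 0.0: fully suppressed below -3
hardswish(4.0)   # 4.0: identity above +3
```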
In a segmentation module 23, input video frames are processed in real- time to generate binary masks for each frame such that pixels belonging to an object exhibiting motion are labeled as wood log. A segmentation module 23 according to an embodiment is shown in more detail in Figure 5.
The segmentation module is responsible for mask generation, patches of wood log and the final wood shape. The segmentation module uses the same pre-trained backbone network for feature extraction. The mask generation branch is added on top of the pre-trained backbone network. The predicted patches of wood log masks are used in the shape estimation module for final shape generation of the wood log.
The segmentation module uses the same backbone network for feature extraction. Additionally, a segmentation head is attached to the network. This module uses dilated convolution, meaning a filter's receptive field can be increased without increasing computational cost, thus making it suitable for real-time applications. The segmentation module uses Reduced Atrous Spatial Pyramid Pooling 41, which extracts convolutional features at multiple scales and segments objects of interest (the wood log) at multiple scales. It uses 1x1 convolution operations 42 on the last layer of the backbone network to extract rich features. Furthermore, it uses adaptive average pooling to capture global features. Multiple layers are concatenated 43 into one or two layers and finally 3x3 convolution operations 44 are used for creating the predicted mask for the segmentation module output 45.
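The benefit of dilated (atrous) convolution described above can be shown with the standard receptive-field formula; the example kernel sizes are illustrative, not taken from the patent:

```python
# Dilation enlarges a filter's receptive field without adding parameters,
# which is why the segmentation head can stay real-time.
def effective_kernel_size(kernel: int, dilation: int) -> int:
    """Effective spatial extent of a dilated convolution kernel."""
    return dilation * (kernel - 1) + 1

effective_kernel_size(3, 1)  # 3: a plain 3x3 convolution
effective_kernel_size(3, 4)  # 9: same 9 weights, but a 9-pixel-wide view
```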
The segmentation module does the following inference steps:
1. Input video frames.
2. Use the extracted features from the backbone network to detect and segment the log.
3. Start tracking the detected log from the first frame.
4. Compare the visual information in the subsequent frames to the detected log in the first frame.
5. Use a scoring function to compute the probability of a match to the information in the first frame.
6. Draw sharp contours around the detected log.
7. Predict a mask for each frame and a complete mask in the final frame.
8. Store the black and white masks for the segmented wood log.
A measurement module 24 is responsible for measurement generation for log length estimation and cutting point identification. The predicted cutting point location is based on two scenarios: 1. the user can provide the length for the optimal cut location, or 2. the cut location is predicted automatically by the system based on any detected defect such as a bend in the wood log. The bends are identified in the segmentation module 23 and further confirmation is done by a shape estimation module 25. Preferably, the cutting point is determined in order of preference of a combination of wood log length and diameter (width). Various combinations of length and diameter can be input with an order of preference and the cutting point is determined by using the most preferred combination of length and diameter that can be realized with the wood log.
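The preference-ordered selection described above can be sketched as follows. The data shapes, units and example values are assumptions; the patent only states that combinations are tried in order of preference:

```python
# Sketch: pick the most preferred (length, diameter) combination that the
# measured log can actually yield.
def choose_cut(preferences, log_length_cm, log_min_diameter_cm):
    """`preferences` is ordered most-preferred first; entries are (length_cm, diameter_cm)."""
    for length, diameter in preferences:
        if length <= log_length_cm and diameter <= log_min_diameter_cm:
            return (length, diameter)
    return None  # no preferred combination can be realized with this log

prefs = [(490, 25), (460, 22), (430, 20), (400, 18)]
choose_cut(prefs, log_length_cm=470, log_min_diameter_cm=23)  # (460, 22)
```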
The measurement module 24 generates measurements by calculating the distance from pixels belonging to the wood log in the first frame, when the wood log is detected by the detection module, and does the distance calculation in the subsequent frames from the prior pixels. This uses a technique called tracking by detection. The module uses the pre-trained DCNN from the detection module. The output from the detection module is taken as an input to this module.
Pixels of the wood log from the first frame are assumed to move in a forward or backward direction. This module uses Kalman filtering motion estimation and trajectory calculation. It converts pixel distances to real-world coordinate distances, assuming a proper camera calibration is done. After a desired distance is reached, a cut mark is assigned to that location.
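The Kalman filtering step can be sketched with a generic one-dimensional constant-velocity filter tracking position along the feed direction. The patent names Kalman filtering but gives no model, so the state model and noise parameters here are assumptions:

```python
# Sketch: 1-D constant-velocity Kalman filter over per-frame position measurements.
def kalman_1d(measurements, q=1e-3, r=1.0):
    """Return filtered positions for measurements[1:] (q: process noise, r: measurement noise)."""
    x, v = float(measurements[0]), 0.0   # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements[1:]:
        # predict one frame ahead (dt = 1) with the constant-velocity model
        x = x + v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # update with the measured position z (Kalman gains k0, k1)
        k0 = p[0][0] / (p[0][0] + r)
        k1 = p[1][0] / (p[0][0] + r)
        innovation = z - x
        x = x + k0 * innovation
        v = v + k1 * innovation
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out
```

A stationary log stays in place, and a log moving at constant speed is tracked with shrinking lag as the filter's velocity estimate converges.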
The shape estimation module 25 is responsible for the shape estimation of the wood log for visual representation that appears on a user interface 26 panel for further user action, and also for the confirmation of any bend identification. The patches of wood log detected and segmented by the detection module 22 and segmentation module 23, respectively, are collected from individual frames and assembled to create a synthetic wood log shape which is very close to the actual shape of the wood log. The output is shown to the user (on the UI panel) for assessment and further actions. The output contains visual clues for bend identification from the wood log, which enables the user to mark possible cut locations.
The shape estimation module 25 consists of utility programs that take input from the segmentation module 23 to calculate the final shape and identify bends (pixels deviating from the rectangle areas). The user can input a rectangle for the desired length and width of the wood log and the bends are identified within that range.
The shape estimation module performs the following inference steps:
1. Input mask patches and rectangle width and height.
2. Assemble mask patches.
3. Compare to the final mask.
4. Draw the rectangle and identify the bend.
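The bend-identification step (pixels deviating from the user's rectangle) can be sketched as follows; the rectangle is reduced to a horizontal band and the deviation measure is an assumed simplification:

```python
# Sketch: count mask pixels outside the horizontal band [top, bottom) of the
# user's rectangle; a non-zero count signals a bend in the assembled shape.
def bend_pixels(mask, top, bottom):
    """Number of set pixels lying outside the rectangle's row range."""
    return sum(
        1
        for y, row in enumerate(mask)
        for value in row
        if value and not (top <= y < bottom)
    )

straight = [[0] * 6, [1] * 6, [1] * 6, [0] * 6]
bent = [[0, 0, 0, 1, 1, 1], [1] * 6, [1] * 6, [0] * 6]
bend_pixels(straight, 1, 3)  # 0: the log stays inside the rectangle
bend_pixels(bent, 1, 3)      # 3: one end of the log deviates upward
```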
The output application / user interface module 26 has a display for presenting a real-time video feed. The video feed includes detection module 22 output containing contours drawn around detected wood log. The module 26 includes a panel which shows the log measurements 27: estimated shape, rectangle output and length overlaid on estimated shape. An optimal wood log cutting point 28 from the measurement module 24 is displayed on the real-time video feed.
It is obvious to a person skilled in the art that, as technology develops, the basic idea of the invention can be implemented in various ways. The invention and its embodiments are therefore not limited to only the examples presented above; rather, they may vary within the scope of the claims.

Claims (9)

1. A method of determining a cutting point of a wood log, wherein the method comprises steps of:
acquiring a plurality of video frames from a video feed,
detecting the wood log from said plurality of video frames,
generating a plurality of masks for said plurality of video frames, each of the plurality of masks indicating pixels representing said wood log in an individual video frame,
creating a synthetic wood log shape representing shape of the wood log and based on said plurality of masks,
estimating measurements of the wood log based on the synthetic wood log shape,
detecting a bend of the synthetic wood log shape, and
determining the cutting point of the wood log based on the estimated measurements, detected bend and a desired log-wood length.
2. A method according to claim 1, wherein said plurality of masks comprise one or more mask patches of the detected wood log.
3. A method according to claim 2, wherein the step of creating the synthetic wood log shape comprises assembling of the mask patches to create the synthetic wood log shape.
4. A method according to any one of claims 1 to 3, wherein the method further comprises a step of drawing a contour around the detected log.

5. A method according to any one of claims 1 to 4, wherein the method further comprises a step of predicting a mask for a frame based on a mask of the previous frame.

6. A method according to any one of claims 1 to 5, wherein the step of detecting a bend comprises detecting pixels in the plurality of video frames deviating from a rectangular shape of the detected wood log.
7. A method according to any one of claims 1 to 6, wherein the step of determining the cutting point comprises fitting a rectangle inside the detected wood log, where a combination of wood log length and diameter has the highest preference according to a predetermined order of preference, where length of the rectangle indicates the length of a cut wood log and width of the rectangle indicates the diameter of the cut wood log.
8. A method according to any one of claims 1 to 7, wherein said plurality of video frames is acquired from a single video feed.
9. A method according to any one of claims 1 to 8, wherein the step of detecting a wood log is performed using a deep convolutional neural network trained with a large-scale image dataset for feature extraction, where backbone networks extract rich features from said image dataset for using said rich features as filters for wood log detection, wherein the method comprises steps of:
extracting features (3) from said single images by the backbone networks,
using said extracted features (4) as filters for subsequent images,
defining probabilities of a recognized object being a wood log for regions of interest (5) based on said extracted features, and
creating a bounding box (6) around the recognized object in a region of interest with the highest probability.
FI20225507A 2022-06-09 2022-06-09 Method of determining cutting point of wood log FI130306B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20225507A FI130306B (en) 2022-06-09 2022-06-09 Method of determining cutting point of wood log
PCT/FI2023/050313 WO2023237812A1 (en) 2022-06-09 2023-06-01 Method of determining cutting point of wood log

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20225507A FI130306B (en) 2022-06-09 2022-06-09 Method of determining cutting point of wood log

Publications (2)

Publication Number Publication Date
FI130306B true FI130306B (en) 2023-06-12
FI20225507A1 FI20225507A1 (en) 2023-06-12

Family

ID=86658355

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20225507A FI130306B (en) 2022-06-09 2022-06-09 Method of determining cutting point of wood log

Country Status (1)

Country Link
FI (1) FI130306B (en)

Also Published As

Publication number Publication date
FI20225507A1 (en) 2023-06-12

Similar Documents

Publication Publication Date Title
CN105678689B (en) High-precision map data registration relation determining method and device
CN113705478B (en) Mangrove single wood target detection method based on improved YOLOv5
CN111814741B (en) Method for detecting embryo-sheltered pronucleus and blastomere based on attention mechanism
CN111882579A (en) Large infusion foreign matter detection method, system, medium and equipment based on deep learning and target tracking
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN110738101A (en) Behavior recognition method and device and computer readable storage medium
CN110197106A (en) Object designation system and method
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN110765865B (en) Underwater target detection method based on improved YOLO algorithm
US20220101628A1 (en) Object detection and recognition device, method, and program
CN114677554A (en) Statistical filtering infrared small target detection tracking method based on YOLOv5 and Deepsort
US8094971B2 (en) Method and system for automatically determining the orientation of a digital image
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN116977960A (en) Rice seedling row detection method based on example segmentation
CN112132884B (en) Sea cucumber length measurement method and system based on parallel laser and semantic segmentation
FI130306B (en) Method of determining cutting point of wood log
FI130303B (en) Method of detecting and segmenting wood log
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
WO2023237812A1 (en) Method of determining cutting point of wood log
CN113627255B (en) Method, device and equipment for quantitatively analyzing mouse behaviors and readable storage medium
CN115995017A (en) Fruit identification and positioning method, device and medium
CN115049600A (en) Intelligent identification system and method for small sample pipeline defects
CN115115954A (en) Intelligent identification method for pine nematode disease area color-changing standing trees based on unmanned aerial vehicle remote sensing
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment