deepforest package

Submodules

deepforest.deepforest module

Deepforest main module.

This module holds the deepforest class for model building and training.

class deepforest.deepforest.deepforest(weights=None, saved_model=None)[source]

Bases: object

Class for training and predicting tree crowns in RGB images.

Parameters:
  • weights (str) – Path to model saved on disk from keras.model.save_weights(). A new model is created and weights are copied. Default is None.
  • saved_model – Path to a saved model from disk using keras.model.save(). No new model is created.
model

A keras training model from keras-retinanet

evaluate_generator(annotations, comet_experiment=None, iou_threshold=0.5, max_detections=200)[source]

Evaluate prediction model using a csv fit_generator.

Parameters:
  • annotations (str) – Path to csv label file, labels are in the format -> path/to/image.png,x1,y1,x2,y2,class_name
  • comet_experiment (object) – A comet experiment object used to track the evaluation
  • iou_threshold (float) – IoU Threshold to count for a positive detection (defaults to 0.5)
  • max_detections (int) – Maximum number of bounding box predictions
Returns:

Mean average precision of the evaluated data

Return type:

mAP
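The annotation format referenced above (path/to/image.png,x1,y1,x2,y2,class_name) can be written with the standard csv module; a minimal sketch (the image paths and box coordinates are made up):

```python
import csv
import io

# Each row: image path, box corners (x1, y1, x2, y2), and class name.
rows = [
    ("images/plot_1.png", 10, 20, 110, 140, "Tree"),
    ("images/plot_1.png", 200, 35, 260, 90, "Tree"),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
for row in rows:
    writer.writerow(row)

# A headerless csv in the format the generator expects.
annotations_csv = buffer.getvalue()
```

In practice the same rows would be written to a file on disk and its path passed as the annotations argument.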

plot_curves()[source]

Plot training curves.

predict_generator(annotations, comet_experiment=None, iou_threshold=0.5, max_detections=200, return_plot=False, color=None)[source]

Predict bounding boxes for a model using a csv fit_generator.

Parameters:
  • annotations (str) – Path to csv label file, labels are in the format -> path/to/image.png,x1,y1,x2,y2,class_name
  • comet_experiment (object) – A comet experiment object used to track the predictions
  • color – rgb color for the box annotations if return_plot is True e.g. (255,140,0) is orange.
  • return_plot – Whether to return prediction boxes (False) or Images (True). If True, files will be written to current working directory if model.config[“save_path”] is not defined.
Returns:

If return_plot is False, a pandas dataframe of bounding boxes for each image in the annotations file.

If return_plot is True, None; images are written to save_dir as a side effect.

Return type:

boxes_output

predict_image(image_path=None, numpy_image=None, return_plot=True, score_threshold=0.05, show=False, color=None)[source]

Predict tree crowns based on loaded (or trained) model.

Parameters:
  • image_path (str) – Path to image on disk
  • numpy_image (array) – Numpy image array in BGR channel order following openCV convention
  • return_plot – Whether to return image with annotations overlaid, or just a numpy array of boxes
  • score_threshold – Minimum probability score for a box to be included, default 0.05
  • show (bool) – Plot the predicted image with bounding boxes. Ignored if return_plot=False
  • color (tuple) – Color of bounding boxes in BGR order (0,0,0) black default
Returns:

If return_plot is True, an image. Otherwise a numpy array of predicted bounding boxes, with scores and labels

Return type:

predictions (array)

predict_tile(raster_path=None, numpy_image=None, patch_size=400, patch_overlap=0.05, iou_threshold=0.15, return_plot=False)[source]

For images too large to input into the model, predict_tile cuts the image into overlapping windows, predicts trees on each window, and reassembles the results into a single array.

Parameters:
  • raster_path – Path to image on disk
  • numpy_image (array) – Numpy image array in BGR channel order following openCV convention
  • patch_size – patch size, default 400
  • patch_overlap – patch overlap, default 0.05
  • iou_threshold – Minimum iou overlap among predictions between windows to be suppressed. Defaults to 0.15. Lower values suppress more boxes at edges.
  • return_plot – Should the image be returned with the predictions drawn?
Returns:

If return_plot is True, an image. Otherwise a numpy array of predicted bounding boxes, scores and labels

Return type:

boxes (array)
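The reassembly step can be illustrated with a small sketch: boxes predicted inside each window are in window-local coordinates and must be shifted by the window's origin before duplicates from overlapping windows are suppressed (the function and variable names below are illustrative, not DeepForest internals):

```python
def to_tile_coords(boxes, window_origin):
    """Shift window-local (x1, y1, x2, y2) boxes by the window's
    (x_off, y_off) origin to get tile-level coordinates."""
    x_off, y_off = window_origin
    return [(x1 + x_off, y1 + y_off, x2 + x_off, y2 + y_off)
            for (x1, y1, x2, y2) in boxes]

# A box at (5, 5)-(50, 60) inside a window whose top-left corner
# sits at (380, 0) in the full tile.
tile_boxes = to_tile_coords([(5, 5, 50, 60)], (380, 0))
```

After this shift, non-max suppression with iou_threshold removes near-duplicate detections produced where windows overlap.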

read_classes()[source]

Read class file in case of multi-class training.

If no file has been created, DeepForest assumes there is one class, Tree.

train(annotations, input_type='fit_generator', list_of_tfrecords=None, comet_experiment=None, images_per_epoch=None)[source]

Train a deep learning tree detection model using keras-retinanet. This is the main entry point for training a new model, starting either from existing weights or from scratch.

Parameters:
  • annotations (str) – Path to csv label file, labels are in the format -> path/to/image.png,x1,y1,x2,y2,class_name
  • input_type – “fit_generator” or “tfrecord”
  • list_of_tfrecords – List of tfrecords to process; ignored if input_type != “tfrecord”
  • comet_experiment – A comet ml object to log images. Optional.
  • images_per_epoch – Number of images per epoch, overriding the default (number of images in annotations file / batch size). Useful for debugging
Returns:

A trained keras model: the prediction model (with bounding box NMS) and the training model (without NMS)

Return type:

model (object)

use_release(gpus=1)[source]

Use the latest DeepForest model release from GitHub and load the model, downloading the release if it does not already exist on disk.

Parameters:gpus – number of gpus to parallelize, default to 1
Returns:A trained keras model
Return type:model (object)

deepforest.predict module

Prediction module.

This module contains prediction utility functions for the deepforest class

deepforest.predict.non_max_suppression(sess, boxes, scores, labels, max_output_size=200, iou_threshold=0.15)[source]

Run non-maximum suppression within a provided tensorflow session.

Parameters:
  • sess – a tensorflow session
  • boxes – predicted bounding boxes
  • scores – prediction scores
  • labels – class labels
  • max_output_size – passed to tf.image.non_max_suppression
  • iou_threshold – passed to tf.image.non_max_suppression

Returns:
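For intuition, the greedy suppression that tf.image.non_max_suppression performs can be sketched in pure Python (an illustrative re-implementation with made-up data, not the function above):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.15, max_output_size=200):
    """Greedy NMS: keep the highest-scoring box, drop remaining boxes
    that overlap it above iou_threshold, and repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order and len(keep) < max_output_size:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

A lower iou_threshold suppresses more overlapping boxes, which is why predict_tile exposes it for merging window predictions.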

deepforest.predict.predict_image(model, image_path=None, raw_image=None, score_threshold=0.05, max_detections=200, return_plot=True, classes={'0': 'Tree'}, color=None)[source]

Predict individual tree crown bounding boxes for a single image.

Parameters:
  • model (object) – A keras-retinanet model to predict bounding boxes, either load a model from weights, use the latest release, or train a new model from scratch.
  • image_path (str) – Path to image file on disk
  • raw_image (array) – Numpy image array in BGR channel order following openCV convention
  • score_threshold (float) – Minimum probability score to be included in final boxes, ranging from 0 to 1.
  • max_detections (int) – Maximum number of bounding box predictions per tile
  • return_plot (bool) – If true, return a image object, else return bounding boxes as a numpy array
  • classes (dict) – Mapping from numeric labels to class names, default {‘0’: ‘Tree’}
  • color – Bounding box color, default None
Returns:

raw_image: If return_plot is True, the image with the overlaid boxes is returned.

image_detections: If return_plot is False, a np.array of image_boxes, image_scores, image_labels.

Return type:

raw_image (array)

deepforest.preprocess module

The preprocessing module is used to reshape data into format suitable for training or prediction.

For example cutting large tiles into smaller images.

deepforest.preprocess.compute_windows(numpy_image, patch_size, patch_overlap)[source]

Create a sliding window object from a raster tile.

Parameters:
  • numpy_image (array) – Raster object as numpy array to cut into crops
  • patch_size (int) – Maximum dimensions of square window
  • patch_overlap (float) – Percent of overlap among windows 0->1
Returns:a sliding windows object
Return type:windows (list)
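The window layout can be sketched in pure Python: window origins step across the raster by patch_size reduced by the requested overlap (a conceptual sketch only; the library's implementation also handles windows clipped at the raster edges):

```python
def window_origins(width, height, patch_size, patch_overlap):
    """Top-left corners of square windows stepping across a
    width x height image with the given fractional overlap."""
    step = int(patch_size * (1 - patch_overlap))
    xs = list(range(0, max(width - patch_size, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - patch_size, 0) + 1, step)) or [0]
    return [(x, y) for y in ys for x in xs]
```

For an 800 x 400 raster with patch_size=400 and patch_overlap=0.05, the step is 380 pixels, giving two windows along the x axis.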
deepforest.preprocess.image_name_from_path(image_path)[source]

Convert path to image name for use in indexing.

deepforest.preprocess.save_crop(base_dir, image_name, index, crop)[source]

Save window crop as image file to be read by PIL.

Filename should match the image_name + window index

deepforest.preprocess.select_annotations(annotations, windows, index, allow_empty=False)[source]

Select annotations that overlap with selected image crop.

Parameters:
  • annotations – A pandas dataframe of annotations in the format -> image_path, xmin, ymin, xmax, ymax, label
  • windows – A sliding window object (see compute_windows)
  • index – The index in the windows object to use as crop bounds
  • allow_empty (bool) – If True, allow window crops that have no annotations to be included
Returns:

a pandas dataframe of annotations

Return type:

selected_annotations
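Conceptually, this selection keeps boxes that fall inside the crop and re-expresses them in crop-local coordinates; a minimal sketch with plain tuples (the real function operates on a pandas dataframe, and its handling of partially overlapping boxes may differ):

```python
def select_for_crop(boxes, window):
    """Keep (xmin, ymin, xmax, ymax) boxes fully inside a square window
    given as (x, y, size), shifted into crop-local coordinates."""
    wx, wy, wsize = window
    selected = []
    for (xmin, ymin, xmax, ymax) in boxes:
        if xmin >= wx and ymin >= wy and xmax <= wx + wsize and ymax <= wy + wsize:
            selected.append((xmin - wx, ymin - wy, xmax - wx, ymax - wy))
    return selected
```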

deepforest.preprocess.split_raster(path_to_raster, annotations_file, base_dir='.', patch_size=400, patch_overlap=0.05, allow_empty=False)[source]

Divide a large tile into smaller arrays. Each crop will be saved to file.

Parameters:
  • path_to_raster – (str): Path to a tile that can be read by rasterio on disk
  • annotations_file (str) – Path to annotations file (with column names) data in the format -> image_path, xmin, ymin, xmax, ymax, label
  • base_dir (str) – Where to save the annotations and image crops relative to current working dir
  • patch_size (int) – Maximum dimensions of square window
  • patch_overlap (float) – Percent of overlap among windows 0->1
  • allow_empty – If True, include images with no annotations in the dataset
Returns:

A pandas dataframe with annotations file for training.

deepforest.retinanet_train module

Retinanet training module.

Developed from keras-retinanet repo

deepforest.retinanet_train.check_args(parsed_args)[source]

Function to check for inherent contradictions within parsed arguments, for example batch_size < num_gpus. Intended to raise errors prior to backend initialisation.

Parameters:parsed_args – parser.parse_args()
Returns:parsed_args
deepforest.retinanet_train.create_callbacks(model, training_model, prediction_model, validation_generator, args, comet_experiment)[source]

Creates the callbacks to use during training.

Parameters:
  • model – The base model.
  • training_model – The model that is used for training.
  • prediction_model – The model that should be used for validation.
  • validation_generator – The generator for creating validation data.
  • args – parseargs args object.
  • comet_experiment – cometml object to log images
Returns:A list of callbacks used for training.
deepforest.retinanet_train.create_generators(args, preprocess_image)[source]

Create generators for training and validation.

Parameters:
  • args – parseargs object containing configuration for generators.
  • preprocess_image – Function that preprocesses an image for the network.
deepforest.retinanet_train.create_models(backbone_retinanet, num_classes, weights, multi_gpu=0, freeze_backbone=False, lr=1e-05, config=None, targets=None, freeze_layers=0, modifier=None)[source]

Creates three models (model, training_model, prediction_model).

Parameters:
  • backbone_retinanet – A function to call to create a retinanet model with a given backbone.
  • num_classes – The number of classes to train.
  • weights – The weights to load into the model.
  • multi_gpu – The number of GPUs to use for training.
  • freeze_backbone – If True, disables learning for the backbone.
  • config – Config parameters, None indicates the default configuration.
  • targets – Target tensors if training a model with tfrecord inputs
  • freeze_layers – int layer number to freeze from bottom of the retinanet network during finetuning. e.g. 10 will set layers 0:10 to layer.trainable = False. 0 is default, no freezing.
  • modifier – function that takes in a model and freezes resnet layers, returns modified object
Returns:

model : The base model. This is also the model that is saved in snapshots.

training_model : The training model. If multi_gpu=0, this is identical to model.

prediction_model : The model wrapped with utility functions to perform object detection (applies regression values and performs NMS).

Return type:

model

deepforest.retinanet_train.main(forest_object, args=None, input_type='fit_generator', list_of_tfrecords=None, comet_experiment=None)[source]

Main training loop.

Parameters:
  • forest_object – a deepforest class object
  • args – Keras retinanet argparse
  • list_of_tfrecords – list of tfrecords to parse
  • input_type – “fit_generator” or “tfrecord” input type

deepforest.retinanet_train.makedirs(path)[source]
deepforest.retinanet_train.model_with_weights(model, weights, skip_mismatch)[source]

Load weights for model.

Parameters:
  • model – The model to load weights for.
  • weights – The weights to load.
  • skip_mismatch – If True, skips layers whose shape of weights doesn’t match with the model.
deepforest.retinanet_train.parse_args(args)[source]

Parse the arguments.

deepforest.tfrecords module

Tfrecord module. Tfrecords creation and reading for improved performance across multiple GPUs. There were tradeoffs made in this approach.

It would be natural to save the preprocessed image produced by the generator to tfrecord, but this results in enormous (100x) files. The compromise was to read the original image from file using tensorflow’s data pipeline. The opencv bilinear resize method is marginally different from the tensorflow method, so we can’t literally assert they are the same array.

deepforest.tfrecords.create_dataset(filepath, batch_size=1, shuffle=True, repeat=True)[source]
Parameters:
  • filepath – list of tfrecord files
  • batch_size – number of images per batch
  • shuffle – shuffle the order of images
  • repeat – repeat the dataset forever
Returns:

a tensorflow dataset object for model training or prediction

Return type:

dataset

deepforest.tfrecords.create_tensors(list_of_tfrecords, backbone_name='resnet50', shuffle=True, repeat=True)[source]

Create a wired tensor target from a list of tfrecords.

Parameters:
  • list_of_tfrecords – a list of tfrecord on disk to turn into a tfdataset
  • backbone_name – keras retinanet backbone
  • repeat – repeat images forever
  • shuffle – shuffle image order
Returns:

inputs – input tensors of images

targets – target tensors of bounding boxes and classes

Return type:

inputs, targets

deepforest.tfrecords.create_tf_example(image, regression_target, class_target, fname, original_image)[source]
deepforest.tfrecords.create_tfrecords(annotations_file, class_file, backbone_model='resnet50', image_min_side=800, size=1, savedir='./')[source]
Parameters:
  • annotations_file – path to 6 column data in form image_path, xmin, ymin, xmax, ymax, label
  • class_file – path to classes file mapping class names to numeric labels (see deepforest.utilities.create_classes)
  • backbone_model – A keras retinanet backbone
  • image_min_side – resized image object minimum size
  • size – Number of images per tfrecord
  • savedir – dir path to save tfrecords files
Returns:

A list of path names of written tfrecords

Return type:

written_files

deepforest.utilities module

class deepforest.utilities.DownloadProgressBar(iterable=None, desc=None, total=None, leave=True, file=None, ncols=None, mininterval=0.1, maxinterval=10.0, miniters=None, ascii=None, disable=False, unit='it', unit_scale=False, dynamic_ncols=False, smoothing=0.3, bar_format=None, initial=0, position=None, postfix=None, unit_divisor=1000, write_bytes=None, lock_args=None, nrows=None, gui=False, **kwargs)[source]

Bases: tqdm.std.tqdm

Download progress bar class.

Parameters:
  • iterable (iterable, optional) – Iterable to decorate with a progressbar. Leave blank to manually manage the updates.
  • desc (str, optional) – Prefix for the progressbar.
  • total (int or float, optional) – The number of expected iterations. If unspecified, len(iterable) is used if possible. If float(“inf”) or as a last resort, only basic progress statistics are displayed (no ETA, no progressbar). If gui is True and this parameter needs subsequent updating, specify an initial arbitrary large positive number, e.g. 9e9.
  • leave (bool, optional) – If [default: True], keeps all traces of the progressbar upon termination of iteration. If None, will leave only if position is 0.
  • file (io.TextIOWrapper or io.StringIO, optional) – Specifies where to output the progress messages (default: sys.stderr). Uses file.write(str) and file.flush() methods. For encoding, see write_bytes.
  • ncols (int, optional) – The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound. If unspecified, attempts to use environment width. The fallback is a meter width of 10 and no limit for the counter and statistics. If 0, will not print any meter (only stats).
  • mininterval (float, optional) – Minimum progress display update interval [default: 0.1] seconds.
  • maxinterval (float, optional) – Maximum progress display update interval [default: 10] seconds. Automatically adjusts miniters to correspond to mininterval after long display update lag. Only works if dynamic_miniters or monitor thread is enabled.
  • miniters (int or float, optional) – Minimum progress display update interval, in iterations. If 0 and dynamic_miniters, will automatically adjust to equal mininterval (more CPU efficient, good for tight loops). If > 0, will skip display of specified number of iterations. Tweak this and mininterval to get very efficient loops. If your progress is erratic with both fast and slow iterations (network, skipping items, etc) you should set miniters=1.
  • ascii (bool or str, optional) – If unspecified or False, use unicode (smooth blocks) to fill the meter. The fallback is to use ASCII characters ” 123456789#”.
  • disable (bool, optional) – Whether to disable the entire progressbar wrapper [default: False]. If set to None, disable on non-TTY.
  • unit (str, optional) – String that will be used to define the unit of each iteration [default: it].
  • unit_scale (bool or int or float, optional) – If 1 or True, the number of iterations will be reduced/scaled automatically and a metric prefix following the International System of Units standard will be added (kilo, mega, etc.) [default: False]. If any other non-zero number, will scale total and n.
  • dynamic_ncols (bool, optional) – If set, constantly alters ncols and nrows to the environment (allowing for window resizes) [default: False].
  • smoothing (float, optional) – Exponential moving average smoothing factor for speed estimates (ignored in GUI mode). Ranges from 0 (average speed) to 1 (current/instantaneous speed) [default: 0.3].
  • bar_format (str, optional) –

    Specify a custom bar string formatting. May impact performance. [default: ‘{l_bar}{bar}{r_bar}’], where l_bar=’{desc}: {percentage:3.0f}%|’ and r_bar=’| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, ‘

    ’{rate_fmt}{postfix}]’
    Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt,
    percentage, elapsed, elapsed_s, ncols, nrows, desc, unit, rate, rate_fmt, rate_noinv, rate_noinv_fmt, rate_inv, rate_inv_fmt, postfix, unit_divisor, remaining, remaining_s.

    Note that a trailing “: ” is automatically removed after {desc} if the latter is empty.

  • initial (int or float, optional) – The initial counter value. Useful when restarting a progress bar [default: 0]. If using float, consider specifying {n:.3f} or similar in bar_format, or specifying unit_scale.
  • position (int, optional) – Specify the line offset to print this bar (starting from 0) Automatic if unspecified. Useful to manage multiple bars at once (eg, from threads).
  • postfix (dict or *, optional) – Specify additional stats to display at the end of the bar. Calls set_postfix(**postfix) if possible (dict).
  • unit_divisor (float, optional) – [default: 1000], ignored unless unit_scale is True.
  • write_bytes (bool, optional) – If (default: None) and file is unspecified, bytes will be written in Python 2. If True will also write bytes. In all other cases will default to unicode.
  • lock_args (tuple, optional) – Passed to refresh for intermediate output (initialisation, iterating, and updating).
  • nrows (int, optional) – The screen height. If specified, hides nested bars outside this bound. If unspecified, attempts to use environment height. The fallback is 20.
  • gui (bool, optional) – WARNING: internal parameter - do not use. Use tqdm.gui.tqdm(…) instead. If set, will attempt to use matplotlib animations for a graphical output [default: False].
Returns:

out

Return type:

decorated iterator.

update_to(b=1, bsize=1, tsize=None)[source]
deepforest.utilities.create_classes(annotations_file)[source]

Create a class list in the format accepted by keras retinanet.

Parameters:annotations_file – an annotation csv in the retinanet format path/to/image.png,x1,y1,x2,y2,class_name
Returns:path to classes file
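The mapping this produces can be sketched: collect the unique class names from the annotations csv and number them consecutively (an illustrative sketch, not the library code, which writes the result to a csv file on disk):

```python
import csv
import io

def classes_from_annotations(annotations_csv_text):
    """Collect unique class names (the 6th csv column) in order of first
    appearance and return classes-file rows in "name,label" form."""
    names = []
    for row in csv.reader(io.StringIO(annotations_csv_text)):
        if row and row[5] not in names:
            names.append(row[5])
    return ["{},{}".format(name, i) for i, name in enumerate(names)]
```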
deepforest.utilities.format_args(annotations_file, classes_file, config, images_per_epoch=None)[source]

Format config file to match argparse list for retinanet.

Parameters:
  • annotations_file – a path to a csv
  • classes_file – path to classes csv file, no header
  • config (dict) – a dictionary object to convert into a list for argparse
  • images_per_epoch (int) – Override default steps per epoch (n images/batch size) by manually setting a number of images
Returns:

a list structure that mimics

argparse input arguments for retinanet

Return type:

arg_list (list)
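The conversion can be sketched as flattening each config entry into a --key value pair (a simplified sketch; the real function also injects the annotations and classes file paths and training-specific flags):

```python
def config_to_arg_list(config):
    """Flatten a config dict into an argparse-style ["--key", "value"] list."""
    arg_list = []
    for key, value in config.items():
        arg_list.extend(["--" + key, str(value)])
    return arg_list
```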

deepforest.utilities.label_to_name(class_dict, label)[source]

Map label to name.

deepforest.utilities.number_of_images(annotations_file)[source]

How many images in the annotations file?

Parameters:annotations_file (str) –
Returns:Number of images
Return type:n (int)
deepforest.utilities.read_config(config_path)[source]
deepforest.utilities.read_model(model_path, config)[source]

Read a keras retinanet model saved with keras.model.save().

deepforest.utilities.round_with_floats(x)[source]

Check if string x is a float or int; return an int, rounded if needed.

deepforest.utilities.use_release(save_dir='/home/docs/checkouts/readthedocs.org/user_builds/deepforest/envs/latest/lib/python3.7/site-packages/deepforest-0.3.2-py3.7-linux-x86_64.egg/deepforest/data/', prebuilt_model='NEON')[source]

Check the existence of, or download the latest model release from github.

Parameters:
  • save_dir – Directory to save filepath, default to “data” in deepforest repo
  • prebuilt_model – Currently only accepts “NEON”, but could be expanded to include other prebuilt models. The local model will be called {prebuilt_model}.h5 on disk.
Returns:

path to downloaded model

Return type:

release_tag, output_path (str)

deepforest.utilities.xml_to_annotations(xml_path)[source]

Load annotations from xml format (e.g. RectLabel editor) and convert them into the retinanet annotations format.

Parameters:xml_path (str) – Path to the annotations xml, formatted by RectLabel
Returns:Annotations in the format -> path/to/image.png,x1,y1,x2,y2,class_name
Return type:Annotations (pandas dataframe)
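A minimal sketch of the conversion using the standard library, assuming Pascal-VOC-style xml of the kind RectLabel writes (illustrative only; the library function returns a pandas dataframe):

```python
import xml.etree.ElementTree as ET

def xml_to_rows(xml_text):
    """Parse VOC-style xml into (filename, xmin, ymin, xmax, ymax, name) rows."""
    root = ET.fromstring(xml_text)
    fname = root.findtext("filename")
    rows = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        rows.append((
            fname,
            int(box.findtext("xmin")), int(box.findtext("ymin")),
            int(box.findtext("xmax")), int(box.findtext("ymax")),
            obj.findtext("name"),
        ))
    return rows
```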

Module contents

Top-level package for DeepForest.

deepforest.get_data(path)[source]