Dlib is principally a C++ library; however, you can use a number of its tools from Python applications. This page documents the Python API for working with these dlib tools. If you haven't done so already, you should probably look at the Python example programs before consulting this reference. These example programs are little mini-tutorials for using dlib from Python. They are listed on the left of the main dlib web page.
This object represents a 1D array of floating point numbers. Moreover, it binds directly to the C++ type std::vector<double>.
cost.nr() == cost.nc() (i.e. the input must be a square matrix)
Interprets cost as a cost assignment matrix. That is, cost[i][j] represents the cost of assigning i to j.
Interprets assignment as a particular set of assignments. That is, i is assigned to assignment[i].
returns the cost of the given assignment. That is, returns a number which is:
sum over i: cost[i][assignment[i]]
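As a sketch of that arithmetic in pure Python (the real function takes a dlib.matrix cost and a list of assigned column indices):

```python
# Pure-Python sketch of what assignment_cost computes; illustrative
# stand-in, not the dlib routine itself.
def assignment_cost(cost, assignment):
    # sum over i: cost[i][assignment[i]]
    return sum(cost[i][assignment[i]] for i in range(len(assignment)))

cost = [[1, 2, 6],
        [5, 3, 6],
        [4, 5, 0]]
print(assignment_cost(cost, [2, 0, 1]))  # 6 + 5 + 5 = 16
```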
This function performs a canonical correlation analysis between the vectors in L and R. That is, it finds two transformation matrices, Ltrans and Rtrans, such that row vectors in the transformed matrices L*Ltrans and R*Rtrans are as correlated as possible (note that in this notation we interpret L as a matrix with the input vectors in its rows). Note also that this function tries to find transformations which produce num_correlations dimensional output vectors.
Note that you can easily apply the transformation to a vector using apply_cca_transform(). For example:
- apply_cca_transform(Ltrans, some_sparse_vector)
returns a structure containing the Ltrans and Rtrans transformation matrices as well as the estimated correlations between elements of the transformed vectors.
This function assumes the data vectors in L and R have already been centered (i.e. we assume the vectors have zero means). That said, in many cases it is fine to use uncentered data with cca(). If centering is important for your problem, you should center your data before passing it to cca().
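Centering just means subtracting the per-dimension mean. A pure-Python sketch on a tiny dense dataset (illustrative only; the real cca() takes sparse_vectors objects):

```python
# Subtract the per-column mean so every dimension has zero mean.
rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 9.0]]
means = [sum(col) / len(rows) for col in zip(*rows)]
centered = [[x - m for x, m in zip(row, means)] for row in rows]
# Every column of the centered data now sums to zero.
print([round(sum(col), 9) for col in zip(*centered)])  # [0.0, 0.0]
```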
This function works with reduced rank approximations of the L and R matrices. This makes it fast when working with large matrices. In particular, we use the dlib::svd_fast() routine to find reduced rank representations of the input matrices by calling it as follows: svd_fast(L, U,D,V, num_correlations+extra_rank, q) and similarly for R. This means that you can use the extra_rank and q arguments to cca() to influence the accuracy of the reduced rank approximation. However, the default values should work fine for most problems.
This function performs the ridge regression version of Canonical Correlation Analysis when regularization is set to a value > 0. In particular, larger values indicate the solution should be more heavily regularized. This can be useful when the dimensionality of the data is larger than the number of samples.
A good discussion of CCA can be found in the paper “Canonical Correlation Analysis” by David Weenink. In particular, this function is implemented using equations 29 and 30 from his paper. We also use the idea of doing CCA on a reduced rank approximation of L and R as suggested by Paramveer S. Dhillon in his paper “Two Step CCA: A new spectral method for estimating vector models of words”.
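To make the objective concrete, here is a toy pure-Python illustration of the quantity CCA maximizes: the correlation between projected left and right vectors. The data and projection direction below are made up for illustration; the real cca() works on sparse_vectors and finds the maximizing directions itself.

```python
# Pearson correlation between two equal-length sequences.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Two paired toy datasets of 2-D row vectors.
L = [(1.0, 0.0), (2.0, 1.0), (3.0, 0.0), (4.0, 1.0)]
R = [(1.1, 5.0), (2.2, 2.0), (2.9, 7.0), (4.3, 1.0)]

def project(rows, w):
    # 1-D projection of each row vector onto direction w.
    return [x * w[0] + y * w[1] for x, y in rows]

# Projecting both sides onto their first coordinate already yields
# highly correlated outputs; CCA searches for the best directions.
print(round(pearson(project(L, (1, 0)), project(R, (1, 0))), 3))  # 0.992
```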
cross_validate_ranking_trainer( (svm_rank_trainer_sparse)trainer, (sparse_ranking_pairs)samples, (int)folds) -> _ranking_test
cross_validate_sequence_segmenter( (sparse_vectorss)samples, (rangess)segments, (int)folds [, (segmenter_params)params=<BIO,highFeats,signed,win=5,threads=4,eps=0.1,cache=40,non-verbose,C=100>]) -> segmenter_test
cross_validate_trainer( (svm_c_trainer_sparse_radial_basis)trainer, (sparse_vectors)x, (array)y, (int)folds) -> _binary_test
cross_validate_trainer( (svm_c_trainer_histogram_intersection)trainer, (vectors)x, (array)y, (int)folds) -> _binary_test
cross_validate_trainer( (svm_c_trainer_sparse_histogram_intersection)trainer, (sparse_vectors)x, (array)y, (int)folds) -> _binary_test
cross_validate_trainer( (svm_c_trainer_linear)trainer, (vectors)x, (array)y, (int)folds) -> _binary_test
cross_validate_trainer( (svm_c_trainer_sparse_linear)trainer, (sparse_vectors)x, (array)y, (int)folds) -> _binary_test
cross_validate_trainer_threaded( (svm_c_trainer_sparse_radial_basis)trainer, (sparse_vectors)x, (array)y, (int)folds, (int)num_threads) -> _binary_test
cross_validate_trainer_threaded( (svm_c_trainer_histogram_intersection)trainer, (vectors)x, (array)y, (int)folds, (int)num_threads) -> _binary_test
cross_validate_trainer_threaded( (svm_c_trainer_sparse_histogram_intersection)trainer, (sparse_vectors)x, (array)y, (int)folds, (int)num_threads) -> _binary_test
cross_validate_trainer_threaded( (svm_c_trainer_linear)trainer, (vectors)x, (array)y, (int)folds, (int)num_threads) -> _binary_test
cross_validate_trainer_threaded( (svm_c_trainer_sparse_linear)trainer, (sparse_vectors)x, (array)y, (int)folds, (int)num_threads) -> _binary_test
Compute the dot product between two dense column vectors.
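A minimal pure-Python sketch of that operation (dlib.dot itself operates on dlib vector objects):

```python
# Dot product: element-wise products, summed.
def dot(u, v):
    assert len(u) == len(v)
    return sum(a * b for a, b in zip(u, v))

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```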
This function modifies its argument so that it is a properly sorted sparse vector. This means that the elements of the sparse vector will be ordered so that pairs with smaller indices come first. Additionally, there won't be any pairs with identical indices. If such pairs are present in the input sparse vector then their values are added together and only one pair with that index will be present in the output.
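The sort-and-merge behavior can be sketched in pure Python like so (a hypothetical stand-in working on (index, value) tuples; the real routine modifies a dlib.sparse_vector in place):

```python
# Sort (index, value) pairs by index and merge duplicate indices by
# summing their values -- what dlib's make_sparse_vector produces.
def make_sparse_vector(pairs):
    merged = {}
    for idx, val in pairs:
        merged[idx] = merged.get(idx, 0.0) + val
    return sorted(merged.items())

print(make_sparse_vector([(4, 1.0), (2, 3.0), (4, 2.0)]))
# [(2, 3.0), (4, 3.0)]
```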
This object represents a dense 2D matrix of floating point numbers. Moreover, it binds directly to the C++ type dlib::matrix<double>.
Return the number of columns in the matrix.
Return the number of rows in the matrix.
Set the size of the matrix to the given number of rows and columns.
Finds and returns the solution to the following optimization problem:
Maximize: f(A) == assignment_cost(cost, A)
Subject to the following constraints:
- The elements of A are unique. That is, there aren’t any elements of A which are equal.
- len(A) == cost.nr()
Note that this function converts the input cost matrix into a 64-bit fixed point representation. Therefore, you should make sure that the values in your cost matrix can be accurately represented by 64-bit fixed point values. If this is not the case then the solution may become inaccurate due to rounding error. In general, this function will work properly when the ratio of the largest to the smallest value in cost is no more than about 1e16.
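For small problems you can check the result against a brute-force search over all permutations; a pure-Python sketch of the optimization being solved (the real routine does this far more efficiently, of course):

```python
from itertools import permutations

# Brute-force sketch of the problem max_cost_assignment solves.
# Only feasible for tiny matrices.
def assignment_cost(cost, assignment):
    return sum(cost[i][assignment[i]] for i in range(len(assignment)))

cost = [[1, 2, 6],
        [5, 3, 6],
        [4, 5, 0]]
best = max(permutations(range(len(cost))),
           key=lambda a: assignment_cost(cost, a))
print(list(best), assignment_cost(cost, best))  # [2, 0, 1] 16
```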
This object is used to represent the elements of a sparse_vector.
This field represents the index/dimension number.
This field contains the value of the vector at the dimension specified by the first field.
This object is used to represent a range of elements in an array.
The index of the first element in the range.
One past the index of the last element in the range.
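In other words the range is half-open, [begin, end), following the same convention as a Python slice:

```python
# A dlib range with begin=2, end=5 covers indices 2, 3, 4 --
# the same half-open convention as Python slicing.
data = ['a', 'b', 'c', 'd', 'e', 'f']
begin, end = 2, 5
print(data[begin:end])  # ['c', 'd', 'e']
```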
This object is an array of range objects.
This object is an array of arrays of range objects.
This class is used to define all the optional parameters to the train_sequence_segmenter() and cross_validate_sequence_segmenter() routines.
SVM C parameter
This object is the output of the dlib.test_sequence_segmenter() and dlib.cross_validate_sequence_segmenter() routines.
This object represents a sequence segmenter and is the type of object returned by the dlib.train_sequence_segmenter() routine.
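The segments a segmenter finds are reported as ranges. As a sketch of how such [begin, end) ranges relate to the BIO tagging scheme mentioned in segmenter_params (bio_to_ranges is a hypothetical pure-Python helper, not part of dlib; well-formed BIO input is assumed):

```python
# Convert a BIO tag sequence into half-open segment ranges:
# each segment starts at a 'B' tag and extends through the 'I' tags
# that follow it.
def bio_to_ranges(tags):
    ranges, start = [], None
    for i, t in enumerate(tags + ['O']):  # sentinel 'O' flushes the last segment
        if t in ('B', 'O'):
            if start is not None:
                ranges.append((start, i))
                start = None
            if t == 'B':
                start = i
    return ranges

print(bio_to_ranges(['O', 'B', 'I', 'O', 'B']))  # [(1, 3), (4, 5)]
```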
This function solves a structural SVM problem and returns the weight vector that defines the solution. See the example program python_examples/svm_struct.py for documentation about how to create a proper problem object.
This object represents the mathematical idea of a sparse column vector. It is simply an array of dlib.pair objects, each representing an index/value pair in the vector. Any elements of the vector which are missing are implicitly set to zero.
Unless otherwise noted, any routines taking a sparse_vector assume the sparse vector is sorted and has unique elements. That is, the index values of the pairs in a sparse_vector should be listed in increasing order and there should not be duplicates. However, some functions work with “unsorted” sparse vectors. These are dlib.sparse_vector objects that have either duplicate entries or non-sorted index values. Note further that you can convert an “unsorted” sparse_vector into a properly sorted sparse vector by calling dlib.make_sparse_vector() on it.
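One reason the sorted-and-unique convention matters is that it allows linear-time merge-style operations. For example, a dot product between two sorted sparse vectors can be sketched as follows (a hypothetical pure-Python helper working on (index, value) tuples, not a dlib routine):

```python
# Merge-style dot product of two sorted sparse vectors: advance
# through both pair lists in index order, multiplying where the
# indices match.  Missing indices are implicitly zero.
def sparse_dot(u, v):
    i = j = 0
    total = 0.0
    while i < len(u) and j < len(v):
        if u[i][0] == v[j][0]:
            total += u[i][1] * v[j][1]
            i += 1
            j += 1
        elif u[i][0] < v[j][0]:
            i += 1
        else:
            j += 1
    return total

print(sparse_dot([(0, 2.0), (3, 1.0)], [(3, 4.0), (5, 1.0)]))  # 4.0
```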
This object is an array of sparse_vector objects.
This object is an array of arrays of sparse_vector objects.
train( (svm_rank_trainer)arg1, (ranking_pairs)arg2) -> _decision_function_linear
train( (svm_rank_trainer_sparse)arg1, (sparse_ranking_pairs)arg2) -> _decision_function_sparse_linear
test_binary_decision_function( (_decision_function_sparse_linear)function, (sparse_vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_radial_basis)function, (vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_sparse_radial_basis)function, (sparse_vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_polynomial)function, (vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_sparse_polynomial)function, (sparse_vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_histogram_intersection)function, (vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_sparse_histogram_intersection)function, (sparse_vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_sigmoid)function, (vectors)samples, (array)labels) -> _binary_test
test_binary_decision_function( (_decision_function_sparse_sigmoid)function, (sparse_vectors)samples, (array)labels) -> _binary_test
test_ranking_function( (_decision_function_sparse_linear)function, (sparse_ranking_pairs)samples) -> _ranking_test
test_ranking_function( (_decision_function_linear)function, (ranking_pair)sample) -> _ranking_test
test_ranking_function( (_decision_function_sparse_linear)function, (sparse_ranking_pair)sample) -> _ranking_test
test_regression_function( (_decision_function_sparse_linear)function, (sparse_vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_radial_basis)function, (vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_sparse_radial_basis)function, (sparse_vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_histogram_intersection)function, (vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_sparse_histogram_intersection)function, (sparse_vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_sigmoid)function, (vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_sparse_sigmoid)function, (sparse_vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_polynomial)function, (vectors)samples, (array)targets) -> _regression_test
test_regression_function( (_decision_function_sparse_polynomial)function, (sparse_vectors)samples, (array)targets) -> _regression_test
test_sequence_segmenter( (segmenter_type)arg1, (sparse_vectorss)arg2, (rangess)arg3) -> segmenter_test
train_sequence_segmenter( (sparse_vectorss)samples, (rangess)segments [, (segmenter_params)params=<BIO,highFeats,signed,win=5,threads=4,eps=0.1,cache=40,non-verbose,C=100>]) -> segmenter_type
This object represents the mathematical idea of a column vector.