Planet Maemo


Large-scale sparse multiclass classification

2013-05-12 12:52 UTC  by  mblondel

I’m thrilled to announce that my paper “Block Coordinate Descent Algorithms for Large-scale Sparse Multiclass Classification” (published in the Machine Learning journal) is now online: PDF, BibTeX [*].

Abstract

Over the past decade, l1 regularization has emerged as a powerful way to learn classifiers with implicit feature selection. More recently, mixed-norm (e.g., l1/l2) regularization has been used to select entire groups of features. In this paper, we propose a novel direct multiclass formulation specifically designed for large-scale and high-dimensional problems such as document classification. Based on a multiclass extension of the squared hinge loss, our formulation employs l1/l2 regularization to force weights corresponding to the same features to be zero across all classes, resulting in compact and fast-to-evaluate multiclass models. For optimization, we employ two globally convergent variants of block coordinate descent, one with line search (Tseng and Yun, 2009) and the other without (Richtárik and Takáč, 2012). We present the two variants in a unified manner and develop the core components needed to efficiently solve our formulation. The end result is a pair of block coordinate descent algorithms specifically tailored to our multiclass formulation. Experimentally, we show that block coordinate descent compares favorably with other solvers such as FOBOS, FISTA and SpaRSA. Furthermore, we show that our formulation obtains very compact multiclass models and outperforms l1/l2-regularized multiclass logistic regression in terms of training speed, while achieving comparable test accuracy.
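
For intuition, the l1/l2 penalty sums, over features, the l2 norm of that feature's weights across all classes, so zeroing out a whole row of the weight matrix removes the feature for every class at once. A minimal numpy illustration (my own sketch, not code from the paper):

import numpy as np

W = np.random.randn(100, 20)  # hypothetical n_features x n_classes weight matrix
# l1 norm over rows of the per-row l2 norms: rows driven to zero
# correspond to features discarded for all classes simultaneously
penalty = np.sum(np.linalg.norm(W, axis=1))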

Code

The code of the proposed multiclass method is available in my Python/Cython machine learning library, lightning. Below is an example of how to use it on the News20 dataset.

from sklearn.datasets import fetch_20newsgroups_vectorized
from lightning.primal_cd import CDClassifier

# Load the 20 Newsgroups dataset as sparse vectors.
bunch = fetch_20newsgroups_vectorized(subset="all")
X = bunch.data
y = bunch.target

# l1/l2-penalized multiclass classifier with squared hinge loss.
clf = CDClassifier(penalty="l1/l2",
                   loss="squared_hinge",
                   multiclass=True,
                   max_iter=20,
                   alpha=1e-4,
                   C=1.0 / X.shape[0],
                   tol=1e-3)
clf.fit(X, y)

# Training accuracy.
print(clf.score(X, y))

# Percentage of selected features.
print(clf.n_nonzero(percentage=True))

To use the variant without line search (as presented in the paper), add the max_steps=0 option to CDClassifier.
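
For instance, keeping the options from the example above (a sketch):

clf = CDClassifier(penalty="l1/l2",
                   loss="squared_hinge",
                   multiclass=True,
                   max_iter=20,
                   alpha=1e-4,
                   C=1.0 / X.shape[0],
                   tol=1e-3,
                   max_steps=0)  # 0 disables line search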

Data

I also released the Amazon7 dataset used in the paper. It contains 1,362,109 reviews of Amazon products. Each review may belong to one of 7 categories (apparel, book, dvd, electronics, kitchen & housewares, music, video) and is represented as a 262,144-dimensional vector. It is, to my knowledge, one of the largest publicly available multiclass classification datasets.

[*] The final publication is available here.

Categories: General

Transparent system-wide proxy

2011-06-26 23:18 UTC  by  mblondel

Proxies can be a powerful way to enforce anonymity or to bypass various kinds of restrictions on the Internet (government censorship, regional content, …). In this post, I’ll describe a simple technique to create a transparent proxy at the system level. It’s especially useful in cases where you want to make sure that all connections go through the proxy, or when your application of interest doesn’t have proxy support. I don’t think this technique is that well-known, hence this post.

1. Create SOCKS proxy with OpenSSH

If you have a user account on a remote machine, the simplest way to create a proxy is to use the following command.

$ ssh -N -D 1080 username@serverhost

It creates a SOCKS proxy listening on port 1080 on your local machine. The main advantage of the SOCKS protocol is that it can tunnel connections to any port between the client and the server.

2. Forward connection transparently with iptables or ipfw

Most modern browsers let the user define a SOCKS proxy in their advanced network preferences. However, many applications don’t support the SOCKS protocol at all. The solution is to use system tools such as iptables (Linux) or ipfw (FreeBSD, OS X) to enforce the routing at the system level. For example, on OS X, I use the following command to redirect port 80 (HTTP) to local port 12345, where redsocks (described in the next step) will be listening.

$ sudo ipfw add 100 fwd 127.0.0.1,12345 dst-port 80
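
On Linux, a rough iptables equivalent would be the following (a sketch; adapt the chain and ports to your setup):

$ sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 12345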

3. Redirect connections to SOCKS proxy with redsocks

iptables and ipfw don’t have built-in support for the SOCKS protocol, so an additional program is needed to translate the redirected connections into SOCKS on the fly. This is where redsocks comes in.

$ redsocks -c config_file

In the configuration file, you need to configure redsocks to listen on port 12345 and to forward connections to the SOCKS proxy on port 1080. “generic” can be used as the redirector option. The GitHub repository includes a sample configuration file.
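
A minimal configuration along these lines (my sketch, adapted from the sample file; exact option names may vary between redsocks versions):

base {
    log_info = on;
    daemon = on;
    redirector = generic;
}

redsocks {
    local_ip = 127.0.0.1;  // where iptables/ipfw sends traffic
    local_port = 12345;
    ip = 127.0.0.1;        // the SSH-created SOCKS proxy
    port = 1080;
    type = socks5;
}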

And voilà! We have now configured the system to transparently route connections for us. To summarize, here is the big picture:

local machine -> redsocks -> SOCKS proxy -> target server
Categories: Sysadmin

Regularized Least Squares

2011-02-09 14:20 UTC  by  mblondel

Recently, I’ve contributed a number of improvements (sparse matrix support, classification, generalized cross-validation) to the ridge module in scikits.learn. Since I’ve been receiving good feedback on my Machine Learning posts, I’m taking this as an opportunity to summarize some important points about Regularized Least Squares, and more specifically ridge regression.
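
As a teaser, ridge regression has a simple closed-form solution; here is a minimal numpy sketch (my own notation, not the scikits.learn implementation):

import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # closed form: w = (X^T X + alpha * I)^{-1} X^T y
    A = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)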

Categories: Machine Learning

Kernel Perceptron in Python

2010-10-31 05:36 UTC  by  mblondel

The Perceptron (Rosenblatt, 1957) is one of the oldest and simplest Machine Learning algorithms. It’s also trivial to kernelize, which makes it an ideal candidate for gaining insight into kernel methods.
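
To give the flavor, here is a minimal sketch of the kernelized version (my simplified version, not the full implementation from the post): instead of maintaining an explicit weight vector, we count the mistakes made on each training point and predict with a kernel expansion.

import numpy as np

def kernel_perceptron(X, y, kernel, n_epochs=10):
    # y must be in {-1, +1}; alpha[i] counts mistakes made on example i
    n = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    alpha = np.zeros(n)
    for _ in range(n_epochs):
        for i in range(n):
            # predict with the kernel expansion sum_j alpha_j y_j K(x_j, x_i)
            if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                alpha[i] += 1  # mistake: remember this example
    return alpha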

Categories: Machine Learning

Support Vector Machines in Python

2010-09-19 14:07 UTC  by  mblondel

Support Vector Machines (SVMs) are state-of-the-art classifiers in many applications and have become ubiquitous thanks to the wealth of open-source libraries implementing them. However, you learn a lot more by actually doing than by just reading, so let’s play a little bit with SVMs in Python! To make the post easier to follow, we will use the same notation as in Christopher Bishop’s book “Pattern Recognition and Machine Learning”.

Categories: Machine Learning

Latent Dirichlet Allocation in Python

2010-08-21 20:52 UTC  by  mblondel

Like Latent Semantic Analysis (LSA) and probabilistic LSA (pLSA) – see my previous post “LSA and pLSA in Python” – Latent Dirichlet Allocation (LDA) is an algorithm which, given a collection of documents and nothing more (no supervision needed), can uncover the “topics” expressed by the documents in that collection. LDA can be seen as a Bayesian extension of pLSA.

As Blei, the author of LDA, points out, the topic proportions in pLSA are tied to the training documents. This is problematic: 1) the number of parameters grows linearly with the number of training documents, which can cause serious overfitting, and 2) generalizing to new documents is difficult and requires so-called “folding-in”. LDA fixes these issues by being a fully generative model: where pLSA estimates a matrix of P(topic|document) probabilities for the training documents, LDA draws each document’s topic proportions from a Dirichlet distribution.
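
To make the generative view concrete, here is a toy numpy sketch of sampling one document under LDA (sizes and hyperparameters are made up for illustration):

import numpy as np

n_topics, vocab_size, doc_len = 5, 1000, 50
rng = np.random.default_rng(0)

# topic-word distributions, shared by the whole corpus
beta = rng.dirichlet(0.01 * np.ones(vocab_size), size=n_topics)
# per-document topic proportions drawn from a Dirichlet prior
# (this is what replaces pLSA's per-training-document parameters)
theta = rng.dirichlet(0.1 * np.ones(n_topics))

words = []
for _ in range(doc_len):
    z = rng.choice(n_topics, p=theta)                # pick a topic
    words.append(rng.choice(vocab_size, p=beta[z]))  # pick a word from it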

Categories: Machine Learning

Semi-supervised Naive Bayes in Python

2010-06-21 17:47 UTC  by  mblondel

Expectation-Maximization

The Expectation-Maximization (EM) algorithm is a popular algorithm in statistics and machine learning for estimating the parameters of a model that depends on latent variables. (A latent variable is a variable that is not observed in the dataset and thus cannot be counted directly. For example, in pLSA, the document topics z are latent variables.) EM is very intuitive. It works by pretending that we already know what we’re looking for: the model parameters. First, we make an initial guess, which can be either random or “our best bet”. Then, in the E-step, we use the current model parameters to estimate the quantities we would have used to compute the parameters, had they been observed. In the M-step, we use these estimates to recompute the model parameters. The beauty of EM is that, by iterating these two steps, the algorithm provably converges to a local maximum of the likelihood that the model generated the data.
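
As a concrete (simplified) example, here is a hard-assignment sketch of EM for semi-supervised Naive Bayes using scikit-learn; true EM would weight the unlabeled documents by their posterior probabilities rather than committing to a single label:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def semisupervised_nb(X_lab, y_lab, X_unlab, n_iter=10):
    clf = MultinomialNB()
    clf.fit(X_lab, y_lab)               # initial guess: labeled data only
    for _ in range(n_iter):
        y_guess = clf.predict(X_unlab)  # E-step: guess the missing labels
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, y_guess])
        clf.fit(X_all, y_all)           # M-step: re-estimate the parameters
    return clf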

Categories: Machine Learning

LSA and pLSA in Python

2010-06-13 17:42 UTC  by  mblondel

Latent Semantic Analysis (LSA) and its probabilistic counterpart pLSA are two well-known techniques in Natural Language Processing that analyze the co-occurrences of terms in a corpus of documents in order to find hidden/latent factors, regarded as topics or concepts. Since the number of topics/concepts is usually much smaller than the number of words, and since no knowledge of the document categories/classes is required, LSA and pLSA are unsupervised dimensionality reduction techniques. Applications include information retrieval, document classification and collaborative filtering.

Note: LSA and pLSA are also known in the Information Retrieval community as LSI and pLSI, where I stands for Indexing.
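
The core of LSA is just a truncated SVD of the term-document matrix; here is a minimal numpy sketch (toy random data standing in for real term counts, k chosen arbitrarily):

import numpy as np

X = np.random.rand(500, 100)  # toy term-document matrix (n_terms x n_docs)
k = 10                        # number of latent topics/concepts

U, s, Vt = np.linalg.svd(X, full_matrices=False)
term_topics = U[:, :k] * s[:k]  # terms embedded in the k-dimensional topic space
doc_topics = Vt[:k].T           # documents embedded in the same space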

Categories: Machine Learning

Seam Carving in Python

2010-02-09 15:57 UTC  by  mblondel

Seam Carving is an algorithm for image resizing introduced in 2007 by S. Avidan and A. Shamir in their paper “Seam Carving for Content-Aware Image Resizing”.


Miyako Island, Okinawa, Japan.

The principle is very simple: find the connected paths of low-energy pixels (“the seams”). This can be done efficiently with dynamic programming (see my post on DTW).
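
Here is a minimal numpy sketch of the dynamic programming step for one vertical seam (my own simplified version, not the code from the repository):

import numpy as np

def vertical_seam(energy):
    # energy: 2d array of per-pixel energies (e.g. gradient magnitude)
    h, w = energy.shape
    cum = energy.astype(float)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            # cheapest way to reach pixel (i, j) from the row above
            cum[i, j] += cum[i - 1, lo:hi].min()
    # backtrack from the cheapest pixel in the bottom row
    seam = [int(np.argmin(cum[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cum[i, lo:hi])))
    return seam[::-1]  # one column index per row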


Same image in the gradient domain showing the vertical and horizontal seams of lowest cumulated energy.

The seams of lowest cumulated energy can be seen as the pixels contributing the least to an image. By repeatedly removing or adding seams, it is thus possible to perform “content-aware” image reduction or extension. The resulting images feel more natural, less “stretched”.


Height reduced by 50% by seam carving.


Height reduced by 50% by traditional rescaling.

Although seam carving doesn’t require human intervention, the original paper also presents a graphical user interface (GUI) that lets the user mark areas that must not be removed or, conversely, areas that must be removed.

In my opinion, seam carving is simple and elegant. No sophisticated object recognition algorithm was used, yet the results are quite impressive.

You can find my implementation in 250 lines of Python in my git repo:

$ git clone http://www.mblondel.org/code/seam-carving.git

A web interface is also available. Unfortunately, it’s too slow to be real-time.

Categories: Image Processing

Easy parallelization with data decomposition

2009-11-27 18:17 UTC  by  mblondel

Recently I came across this blog post, which introduced me to the new multiprocessing module in Python 2.6, a module for running multiple concurrent processes. It makes parallelizing your programs very easy. The author also provided a smart code snippet that makes using multiprocessing even easier. I studied how the snippet works and came up with an alternative solution which is, in my opinion, very elegant and easy to read. I’m so excited about the possibilities offered by this module that I had to spread the word. But first, some background.
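
For the impatient, the essence of the module is the process pool; a minimal sketch of data decomposition with Pool.map:

from multiprocessing import Pool

def square(x):
    # stand-in for any CPU-bound function applied independently to each item
    return x * x

if __name__ == "__main__":
    pool = Pool(processes=4)                 # e.g. one worker per core
    results = pool.map(square, range(100))   # split, compute in parallel, gather
    pool.close()
    pool.join()
    print(results[:5])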

Categories: Python

First release of Tegaki

2009-02-11 22:33 UTC  by  mblondel

Today I’m releasing Tegaki 0.1. Tegaki is an ongoing project which aims to develop a free, open-source and modern implementation of handwriting recognition software that is suitable for both the desktop and mobile devices, and that is designed from the ground up to work well with Chinese and Japanese.

Screencast video: ogg or youtube.

This release features desktop and SCIM integration. However, the main “innovation” brought by this release is the user interface. It uses two drawing areas for continuous writing. The user can then fix recognition errors by choosing alternative candidates or editing characters. Since a video is worth a thousand words, see the screencast above. This interface is largely inspired by the Nintendo DS game “Kanji Sono Mama Rakubiki Jiten” (漢字そのまま楽引辞典).

Tegaki is designed to support several recognition engines. However, so far it only supports Zinnia, which is the only recognition engine I know of that combines acceptable recognition accuracy with good performance on mobile devices. One future challenge for the project will be to create a new recognition engine that yields better results than Zinnia.

My approach on this project is to use Python whenever possible and to use C or C++ only when performance is critical, as in the recognition engines. Compared to Tomoe, which implements everything in C and provides bindings for several languages, this means less reusability of the components, but I hope it will make the project move forward faster.

There are still a lot of things to be done in various areas, but I really wanted to release the code I’ve put together so far because I think it can already be useful to end users. By the way, Maemo supports both pygtk and SCIM through third-party projects, so Tegaki is just a few Debian packages away from being available on Maemo.

For further details:
http://tegaki.sourceforge.net/

Categories: Projects

Linux in a Virtual Machine

2008-12-26 12:39 UTC  by  mblondel

I own a MacBook on which I’ve been running Linux 99% of the time for over a year now. Although a MacBook is not necessarily the best choice for running Linux, I made that decision because installing Linux on a MacBook is very well documented. Even so, it’s always difficult to get a configuration you are 100% happy with (no subwoofer support, flaky suspend…). With recent advances in virtualization technologies, both in software and hardware, I’ve been wanting to try running Linux and Windows (the guest OSes) inside Mac OS X (the host OS).

Categories: Sysadmin