Blog

Articles about computational science and data science, neuroscience, and open source solutions. Personal stories are filed under Weekend Stories. Browse all topics here. All posts are CC BY-NC-SA licensed unless otherwise stated. Feel free to share, remix, and adapt the content as long as you give appropriate credit and distribute your contributions under the same license.


Wasserstein GANs

posted:
We apply the Wasserstein distance to Generative Adversarial Networks (GANs) to train them more effectively. We compare a standard GAN with a Wasserstein GAN (WGAN), both trained on the MNIST dataset, and discuss the advantages and disadvantages of the two approaches.
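
As a taste of what the post covers, here is a minimal sketch of the WGAN critic objective with weight clipping in PyTorch; the tiny architecture and the 0.01 clipping threshold are illustrative choices, not the exact setup from the post:

```python
import torch
import torch.nn as nn

# Minimal critic for 28x28 MNIST images (illustrative architecture)
critic = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),   # no sigmoid: the critic outputs a raw score
)

def wgan_critic_loss(real, fake):
    # WGAN critic objective: maximize E[f(real)] - E[f(fake)],
    # i.e. minimize the negated difference
    return critic(fake).mean() - critic(real).mean()

def clip_weights(c=0.01):
    # weight clipping enforces the Lipschitz constraint (original WGAN)
    for p in critic.parameters():
        p.data.clamp_(-c, c)

# toy batch standing in for real MNIST images and generator output
real = torch.rand(64, 1, 28, 28)
fake = torch.rand(64, 1, 28, 28)
loss = wgan_critic_loss(real, fake)
loss.backward()
clip_weights()
```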

Probability distance metrics in machine learning

posted:
Probabilistic distance metrics play a crucial role in a broad range of machine learning tasks, including clustering, classification, and information retrieval. The choice of metric is often determined by the specific requirements of the task at hand, with each having unique strengths and characteristics. In this post, we discuss five commonly used metrics: the Wasserstein Distance, the Kullback-Leibler Divergence (KL Divergence), the Jensen-Shannon Divergence (JS Divergence), the Total Variation Distance (TV Distance), and the Bhattacharyya Distance.
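
For a concrete starting point, the sketch below computes all five metrics for two toy discrete distributions with NumPy and SciPy (the distributions are made up for illustration):

```python
import numpy as np
from scipy.stats import wasserstein_distance, entropy

# two toy discrete distributions over the same support
p = np.array([0.1, 0.4, 0.5])
q = np.array([0.3, 0.3, 0.4])
support = np.array([0.0, 1.0, 2.0])

wd = wasserstein_distance(support, support, u_weights=p, v_weights=q)
kl = entropy(p, q)                              # KL divergence D_KL(p || q)
m = 0.5 * (p + q)
js = 0.5 * entropy(p, m) + 0.5 * entropy(q, m)  # JS divergence
tv = 0.5 * np.abs(p - q).sum()                  # total variation distance
bc = np.sum(np.sqrt(p * q))                     # Bhattacharyya coefficient
bd = -np.log(bc)                                # Bhattacharyya distance
```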

Comparing Wasserstein distance, sliced Wasserstein distance, and L2 norm

posted:
In machine learning, especially when dealing with probability distributions or deep generative models, different metrics are used to quantify the ‘distance’ between two distributions. Among these, the Wasserstein distance (EMD), the sliced Wasserstein distance (SWD), and the L2 norm play an important role. Here, we compare these metrics and discuss their advantages and disadvantages.
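
The following sketch contrasts the three metrics on toy 2D samples; the random-projection SWD and the histogram-based L2 comparison are simplified illustrations, not the exact code from the post:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))   # samples from distribution P
y = rng.normal(0.5, 1.2, size=(500, 2))   # samples from distribution Q

def sliced_wasserstein(x, y, n_proj=100):
    # project onto random directions; each 1D Wasserstein distance is
    # cheap to compute, then average over the projections
    d = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=2)
        theta /= np.linalg.norm(theta)
        d += wasserstein_distance(x @ theta, y @ theta)
    return d / n_proj

swd = sliced_wasserstein(x, y)

# L2 norm between histogram estimates of the two densities
hx, _ = np.histogramdd(x, bins=20, range=[(-4, 4), (-4, 4)], density=True)
hy, _ = np.histogramdd(y, bins=20, range=[(-4, 4), (-4, 4)], density=True)
l2 = np.sqrt(((hx - hy) ** 2).sum())
```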

Approximating the Wasserstein distance with cumulative distribution functions

posted:
In the previous two posts, we’ve discussed the mathematical details of the Wasserstein distance, exploring its formal definition and its computation through linear programming and the Sinkhorn algorithm. In this post, we take a different approach and approximate the Wasserstein distance with cumulative distribution functions (CDFs), providing a more intuitive understanding of the metric.
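
The core idea fits in a few lines of NumPy: for one-dimensional distributions, the 1-Wasserstein distance equals the area between the two empirical CDFs (the sample distributions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(0.0, 1.0, 10_000)   # samples from the first distribution
v = rng.normal(1.0, 1.5, 10_000)   # samples from the second distribution

# For 1D distributions, W1 equals the integral of |F_u(x) - F_v(x)|.
# Approximate both empirical CDFs on a common grid and integrate.
grid = np.linspace(-8, 10, 2000)
F_u = np.searchsorted(np.sort(u), grid) / len(u)
F_v = np.searchsorted(np.sort(v), grid) / len(v)
w1 = np.sum(np.abs(F_u - F_v)) * (grid[1] - grid[0])
```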

Wasserstein distance via entropy regularization (Sinkhorn algorithm)

posted: updated:
Calculating the Wasserstein distance via linear programming can be computationally costly. The Sinkhorn algorithm provides an efficient method for approximating the Wasserstein distance, making it a practical choice for many applications, especially for large datasets.
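
A bare-bones version of the Sinkhorn iteration fits in a dozen lines of NumPy; the regularization strength and iteration count below are illustrative, not tuned values from the post:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=200):
    """Entropy-regularized OT cost between histograms a and b with cost C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # alternating scaling updates
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # transport plan
    return np.sum(P * C)                  # regularized transport cost

# toy 1D histograms on a grid
n = 50
x = np.linspace(0, 1, n)
a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2        # squared-distance cost matrix
print(sinkhorn(a, b, C))
```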

Wasserstein distance and optimal transport

posted:
The Wasserstein distance, also known as the Earth Mover’s Distance (EMD), provides a robust and insightful approach for comparing probability distributions and finds application in various fields such as machine learning, data science, image processing, and information theory. In this post, we take a look at the optimal transport problem, which underlies the calculation of the Wasserstein distance, and show how to compute the metric in Python.
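
As a quick preview, the sketch below solves a tiny optimal transport problem, here with the POT (Python Optimal Transport) package; the point clouds and weights are made up, and the post may use a different setup:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

# two empirical distributions as weighted point clouds
xs = np.array([[0.0], [1.0], [2.0]])
xt = np.array([[0.5], [1.5], [3.0]])
a = np.array([0.4, 0.4, 0.2])   # source weights (sum to 1)
b = np.array([0.3, 0.3, 0.4])   # target weights (sum to 1)

M = ot.dist(xs, xt, metric="euclidean")  # ground cost matrix
emd = ot.emd2(a, b, M)                   # solves the OT linear program
print(emd)
```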

Visualizing Occam’s Razor through machine learning

posted:
Here, we illustrate the concept of Occam’s Razor, a principle advocating for simplicity, by examining how it manifests in machine learning, using Python.
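
A common way to visualize this idea, sketched below with scikit-learn, is to fit polynomials of increasing degree and compare the held-out error of a too-simple, a well-matched, and an overly flexible model (the data and degrees are illustrative, not the exact experiment from the post):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# noisy samples from a smooth underlying function
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)[:, None]
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 60)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_tr, y_tr)
    print(degree, mean_squared_error(y_te, model.predict(x_te)))
```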

Mamba vs. Conda: Unleashing lightning-fast Python package installations

posted: updated:
If you’ve ever experienced the frustration of waiting for ages while installing Python packages with conda, there’s a game-changer I wish I’d heard about earlier: Mamba. This lightning-fast package manager surprised me with its incredible speed, making package installations a breeze. Here is my personal experience and why Mamba is the speed demon you may have been looking for.

Integrate-and-Fire model: A simple neuronal model

posted: updated:
In this post, we explore the Integrate-and-Fire model, a simplified representation of a neuron, and run simulations in Python to understand the model dynamics.
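
A minimal leaky integrate-and-fire simulation with Euler integration looks roughly like this; the parameter values are typical textbook choices, not necessarily those used in the post:

```python
import numpy as np

# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I(t)
tau, R = 10.0, 1.0                               # time constant (ms), resistance
V_rest, V_thresh, V_reset = -65.0, -50.0, -70.0  # membrane potentials (mV)
dt, T = 0.1, 200.0                               # time step and duration (ms)

t = np.arange(0, T, dt)
I = np.full_like(t, 20.0)    # constant input current
V = np.full_like(t, V_rest)  # membrane potential trace
spikes = []

for i in range(1, len(t)):
    dV = (-(V[i - 1] - V_rest) + R * I[i - 1]) / tau
    V[i] = V[i - 1] + dt * dV
    if V[i] >= V_thresh:     # threshold crossing: record spike, reset
        spikes.append(t[i])
        V[i] = V_reset
```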

Assessing animal behavior with machine learning: New DeepLabCut tutorial

posted:
I have added a hands-on tutorial to the Assessing Animal Behavior lecture. The tutorial covers the GUI-based use of DeepLabCut, a popular open-source software package for markerless pose estimation of animals. The target group is neuroscience students with little or no programming knowledge. Feel free to share the tutorial with students or colleagues who might be interested in using DeepLabCut for their own projects.
